With what specs since you can predict performance that well...
Something around 416 SPs, a 384-bit bus with 1536MB of GDDR3, a 650MHz core clock, and some sugar and spice and everything nice!!!! :cool:
I am not predicting performance based on specs, but based on "strategy", where it should be positioned, imo. It's just an opinion.
I would bet on late Q1/Q2 if the "full" GT300 really does launch in November, but hey, I don't know a thing.
Nvidia says their architecture scales down well... I don't think ANYONE should be making ANY claims about ANYTHING until SOMETHING actually happens...
Most of the stuff in this thread is 100% BS... it's just one extreme to the next... just don't say anything :)
You're all missing a key fact! Microsoft and ATI are basically becoming partners. The Xbox has tessellator technology because it has an ATI GPU. Microsoft wants to develop DX as closely with ATI as possible, because it's good for both of their businesses. If Microsoft gains exclusive advantages through the Xbox, it has every reason to back ATI and their vision of 3D graphics.
that doesn't match nvidia's strategy though...
historically it has always been their strategy to have parts that beat ati slightly or notably and then charge a small to notable premium for that.
like 5-15% more performance for $50-100 more.
so i doubt they will have a part that comes in between the 5850 and the 5870... the performance difference between a 5850 and a 5870 isn't that big, and it makes more sense for them to go for a part that still beats the 5870, just not by as much as the full-blown gt300.
the further down they scale gt300, the more they lose the advantage of having one big beefy gpu, and it becomes harder if not impossible to compete price-wise with a chip like juniper, which is half the size...
we might as well shut down the internet then ^^
Hey SAAYA, did you see my post for you a couple pages back, on page 21? You gotta see it. :rofl:
522? hehe yeah :D
you forgot to mention that nvidia had gt300 silicon in q1 of this year, which is what nvidia originally claimed :D
and about the soda cans, they actually integrated that game into cryostasis in one of the first levels :D
seriously, you open a door and there is a pile of cans that then collapses, sponsored by PhysX(TM)... wow... great... i feel soo immersed in the gameplay right now! :lol:
it's really a shame... i wish nvidia would push for proper PhysX instead of gimmicks that you can show off easily and patch onto games without much effort, but that don't really do anything besides catch your attention for a second or two and that's it...
whatever happened to CellFactor btw??? :confused:
Yeah, it's pathetic what's going on. I'm not a fanboy of either camp. I have a 9800GT, I'm sorry, 8800GT, what's the difference :shrug: but I tend to favor ATI's business ethics. At least they improve the industry for all consumers, unlike the Nvidian hypocrites. Nvidia: "We're saving gaming by making PC games 'special' and adding PhysX." If they truly meant it, they would do what's best for us consumers and improve games for all platforms, even at a loss. What's the saying, "you receive 10X what you give away"? I believe every word of it. In fact I'd have more respect for them if they improved games at a loss; it isn't a loss if you receive something in return, unlike sabotaging ATI performance. If Nvidia truly wants PhysX to take the world by storm, then give it away, set it free, and open it up to everyone who really needs it. That alone could be enough to get it adopted and used by everyone. They could make more money by making it work better than anyone else than by being the only one with it, and then the money will come to them. It's really simple, ain't it? :up:
If I had the cure for cancer what would I do?
a: Sell it by the bottle to be a millionaire
b: Give it away to save millions of lives
I would do b, but that's me. And I bet you I would make more than enough cash giving it away and saving lives.
Think if they marketed PhysX as folding for cures, and how they donated this technology to better mankind. That would make me feel so fuzzy inside that I'd want to buy an nvidia card! Cuz we're saving lives here, and Nvidia can save more when you use their cards.
Instead of donating money to charity, just buy an Nvidia GPU for $50 or $100 or more and help find the cures! Anyone can help, see how much you're helping day to day, and make a difference. That sounds great if I do say so myself!
Check this out:
http://developer.nvidia.com/object/physx_downloads.html
Literally anyone can download the SDK for free. So why hasn't it "taken the world by storm" as you said? It's proprietary. NVIDIA claims that PhysX, along with CUDA, is open to any hardware should one want to support it. The most likely reason holding AMD back from doing this is the unbalanced performance prospects of such an implementation. They probably don't feel like they need to anyway, since more open standards like Bullet physics will eventually arise to make use of OpenCL.
Of course, if AMD were so confident in OpenCL, they might be more enthusiastic about getting a working driver out for it, but as it is, even NVIDIA has beaten them to the punch there. AMD makes great hardware, but they really suck on the software/developer support side compared to the competition.
I was just making a marketing point. It sure sounds a whole lot better the way I spun it, I think so. Like I said, donate the technology to the community! But obviously you're happy supporting a proprietary standard that's going nowhere fast. Why don't you support something that's beneficial to everyone? There is a difference between letting someone borrow your stuff and just giving it to them to do what they want! You can only use what they let you; you can't just go and build your own PhysX PPU and sell it. Why? Because it's not free or open!
AMD is already working on getting their OpenCL certification for the GPU. They have it for the CPU, but the GPU is still waiting for certification from the Khronos Group. Their OpenCL driver will be released within 2 months!
IMO, the most likely reason holding AMD back from adopting CUDA is the fact that it's proprietary in the first place.
If AMD had adopted CUDA, it probably would have become the standard (given the absence of the other standards that are only now starting to appear).
Then, being a proprietary API, everything would be in the hands of their main (only?) competitor:
What if, from a certain version of the API onwards, NVIDIA decided not to let competitors use the latest versions, to keep themselves ahead? (See Creative and EAX.)
The development of the API would also depend solely on NVIDIA's criteria, and AMD wouldn't have a say in it.
Of course, there's what you say about performance, but IMO that would be one of the lesser problems here.
There are tons of reasons why AMD isn't interested in a proprietary API from its main competitor becoming the de facto standard for GPGPU, so it's better if they don't help their enemy achieve that. :yepp:
About OpenCL, they have seemed very enthusiastic to me. They have sent a driver to the Khronos Group that is pending approval (for compliance), even before NVIDIA did (only a little before, I know). And just like NVIDIA, they have a pre-release driver that developers can already work with. They aren't publicizing it as much as NVIDIA, but they are using it more: they are working with Havok, Bullet Physics and Pixelux, for example, to implement OpenCL. So I don't see how NVIDIA is beating AMD on OpenCL ground.
This is not how you run a business. What world do you live in, a fantasy world where everything is powered by love? Should they give away computers to third-world countries so they can learn? Should they give free videocards to people so that they may fold? You're being completely unrealistic.
If they gave PhysX out to anyone and didn't charge licensing for it, they would have just lost the money spent developing the technology and the money spent buying Ageia.
From your lovey-dovey perspective, what has AMD done to benefit mankind? They run a business like anyone else.
By some of the points you are making, NV is already starting to save lives, because their cards are the best at folding for cures and as a result they are one of the biggest contributors (the PS3 (RSX) too). You most likely know this too, so buy more NV cards. Honestly, I would find it tacky and relatively deceptive if NV's main advertisements mentioned folding for diseases everywhere, since we all know this was not their main purpose. It would seem shallow and artificial, because all of us who are not naive would know that NV mainly sells videocards and the folding part is just a bonus.
I don't even see how AMD improves the industry for all consumers, besides being around to provide competition in the industry they are in. Something NV does as well.
Also, how are you giving it away (the cure for cancer) if you're still making money off it? The most selfless thing (and the right thing to do, in your opinion) would be to not collect any of the money, to donate any money generated from the patent to charity, and lastly to do this somehow anonymously so you don't get the credit for it.
Guys, you do know that NV gives PhysX out to anyone and doesn't charge licensing for it (developers, or IHVs like AMD...), don't you?
They offered to implement CUDA for AMD for free (with PhysX in the bargain), and PhysX is completely free to license for software developers.
They bought AGEIA to use PhysX as a way of marketing CUDA in mainstream, gamer territory, in their race to establish CUDA as the de facto GPGPU standard.
They don't need licensing money to profit from their investment; what they need is CUDA being used.
good point, that actually DOES make a difference... that's the ONLY useful application of gpgpu so far imo...
yepp... they have come a long way and improved their driver support, but it's still not great, and still slightly behind nvidia... and their dev support is much worse than nvidia's afaik...
and farinorco, if cuda hadn't been a closed standard, we might never have seen opencl and directcompute... they might have adopted it or made it v1.1 or 2.0 of cuda... but nvidia was too caught up in promoting themselves and too greedy, trying to put chains on everybody else...
asking others to trust them not to cripple cuda for them is really quite something after nvidia's history with other industry players...
I'm sure Nvidia wouldn't make any money selling video cards if they had a life-changing initiative to fold for cures.
There have been countless people who have left an impact on this world, some good and some bad. The only ones that matter are the good ones, because without them you may never even have had a chance to live. How many people can you name that have done something good for mankind? It's always harder to remember the good people who made a difference than the ones who never did.
Why don't you read about the business practices of this guy: Percy Julian http://inventors.about.com/library/i...lcortisone.htm
Read about this guy too while you're at it: Forrest Bird http://inventors.about.com/od/bstart...rrest_Bird.htm
The thing about PhysX is that it may be open to use by anyone, but it's not a standard and it's not open source.
Once it actually becomes a standard and open source... where it can be applied in many more games for more than just special effects... you'll see performance increases... and in general just better physics...
That's what I don't like about PhysX... it has the potential (even though atm it takes a relatively big performance hit) to do something worthwhile in the world (not only in gaming) and yet nvidia isn't letting it run free (disabling it in the presence of an AMD card... wtf?)
If nvidia set PhysX free, the gaming world would be a much better place on many levels (literally and figuratively)
They should port PhysX to OpenCL.
That would be awesome for consumers, and for PhysX itself, but I think it would completely defeat the purpose of the AGEIA acquisition and the PhysX promotion.
IMO, NV's goal was to introduce CUDA to the mainstream gamer community so it would be used by as many developers as possible. That way a videocard's CUDA support gains additional value in that market, value that would be lost if CUDA weren't required to run PhysX...
So I don't think it will happen any time soon, at least not unless they decide the original goal has been missed anyway. Which could happen if developers start favouring more standard, compliant solutions such as other OpenCL-accelerated libraries instead of accepting the restrictions of a CUDA-only library.
CUDA is a computing architecture. The APIs for CAL and CUDA are both just C with extensions, so they are really both proprietary. The API needs a compiler for each architecture. CUDA is not something you just port to another GPU; there would really be no point, because they would end up being the same thing.
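For anyone who hasn't looked at it, "C with extensions" is a fair description. Here's a minimal sketch (hypothetical kernel name, just for illustration): the __global__ qualifier and the <<<blocks, threads>>> launch syntax are the NVIDIA-specific extensions, the rest is ordinary C.

#include <stdio.h>

// __global__ marks a function that runs on the GPU (a "kernel").
__global__ void add_arrays(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float ha[1024], hb[1024], hc[1024];
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // <<<number of blocks, threads per block>>> is the CUDA launch syntax.
    add_arrays<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[10] = %f (expect 30.0)\n", hc[10]);
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}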
Bro, it's not an all-or-nothing proposition^^... at what PRICE ..?
How has Nvidia confirmed this?
Will their Fermi beat a 5890? How about Hemlock..?
What price (again)..?
Nvidia is moving their business model away from gaming and into general compute.. the only thing I've seen is NV conceding the gaming market/segment to ATi as they pursue this wild dream of being a CPU maker..!
Folding@home is great.. but I don't buy my VIDEO CARDS based on general compute speed.
/fail!
As has been stated, it's not exclusively for gaming. It's for bigger projects.
Deal with it.
If it also turns out to be a very good competitor in gaming... that's just a bonus from nvidia's perspective.
I'm obviously talking about the CUDA API, not the CUDA architecture. The CUDA architecture is something internal, like the ISA, and access is granted through other intermediate layers, so there's nothing to standardize there. It's their proprietary API that they want to see used; nobody uses "architectures" directly...
That's exactly the problem, IMO (and it extends to the previous gen too).
NVIDIA is trying to reach a new (emerging) market, what they call the HPC market, with their architecture. The problem is that this new market isn't there yet, so they can't split their R&D and chip manufacturing into 2 different architectures and/or product lines, so they have to make 3D rendering chips (the current market) that are also good for the HPC market.
That eats transistors, architecture development time, and so on. So the resulting chip isn't specialized for the 3D rendering market, and has difficulty competing with products that are (in terms of performance/features per cost).
I'm starting to think that's the main reason for the weak performance-to-size ratio of the past generation, and I'm starting to think we are going to see a repeat with this one.
That could be a way to describe what's going on with nV lately: using a hybrid solution in order to explore both markets (gaming and gpgpu) with one product.
If they establish themselves in both markets, I bet the gpgpu market will have its own line of chips and the gaming market will have its own too. Maybe we are far from that, maybe one or two generations away... It depends on many factors, but the most relevant of all, imo, is Fermi's success.
You'd have to use a crappy resource hogging AV program anyway to need GPGPU acceleration, don't see the point in it. :shrug:
Everyone knows that Nvidia's Fermi will beat ATI's Evergreen, but the important part is how much it will cost.
The gaming market is what got them to where they are. I'm glad that they are trying to enter new markets; GPGPU has so many possibilities. But what they can't do is enter those new markets at the expense of the market that really supports them: gaming.
If they have poor price/performance or power/performance for games, then those of us who use video cards primarily for gaming may pass on it. GPGPU needs more killer apps developed on standards, and then it will become more of a decision-making factor.
I'd say they can... as long as they price their products according to what they offer the consumer. Think about last generation, for example. Once the HD4000 series was launched, the GTX200 series had its prices adjusted accordingly.
Of course, given the higher cost of GTX200, that translated into economic losses for them, but that could be seen as an investment in the new market they are aiming at. If they have the funds to absorb that investment...
Then, if everything goes well and the new market grows as expected, they can split the two product lines, normalizing the situation, and try to make the investment in the new market profitable.
Of course, if they have gone into this too soon, or overestimated the profitability of this market, or underestimated the losses they are going to face, they could find themselves in a complicated financial position... maybe they're taking the risk because of Intel's arrival in the market with Larrabee, I don't know...
I'm not an expert on this matter, but I think I see some logic in all this (which may be completely unrealistic, of course :D). Good or bad decision, I don't know... too much for me. :yepp:
I think it's reasonable to say: if Fermi fails, so does nvidia; by how much depends on how big a miss it is. If it succeeds, it's business as usual, at least from a gaming and enthusiast perspective.
Ever since the specs were released, this thread has been nothing but people repeating the same thing over and over for the past week or two.
IMO it should be locked until more concrete info is leaked/released. AKA closed for a month or two, 'cause that's how long it will be until we get some actual new news on gt300.
even with the specs, we can't really tell how these cards perform until they are run through a bunch of benchmarks, because they are too different from existing cards.
historically though, newer graphics cards tend to be better than the graphics cards already on the market. it's somewhat rare to see exceptions to that rule.
That's just pure denial. You can get an approximation of performance based on all of the whitepapers and specs that have been released. No one said anything about "mind-blowing performance". It's fairly obvious that it will beat the 5870, even if it's only 60% faster; that's based on the increase in bandwidth.
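For what it's worth, here is the back-of-the-envelope bandwidth math behind that kind of claim. The GTX 285 and HD 5870 figures are their published specs; the Fermi memory clock is still unannounced, so the number used below is purely an assumed placeholder.

#include <stdio.h>

// Peak memory bandwidth in GB/s from bus width and effective data rate.
static double peak_gbps(int bus_width_bits, double mtransfers_per_sec)
{
    return (bus_width_bits / 8.0) * mtransfers_per_sec * 1e6 / 1e9;
}

int main(void)
{
    double gtx285 = peak_gbps(512, 2484.0);  // 512-bit GDDR3 @ 2484 MT/s, ~159 GB/s
    double hd5870 = peak_gbps(256, 4800.0);  // 256-bit GDDR5 @ 4800 MT/s, ~154 GB/s
    double fermi  = peak_gbps(384, 4200.0);  // 384-bit GDDR5, ASSUMED 4200 MT/s, ~202 GB/s
    printf("GTX 285: %.0f GB/s, HD 5870: %.0f GB/s, hypothetical Fermi: %.0f GB/s\n",
           gtx285, hd5870, fermi);
    return 0;
}

Whether raw bandwidth actually translates into anything like a 60% gain in games is a whole other question, of course.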
We can't know for sure, but I'm completely convinced the GTX380 (or whatever they name it) will be faster than the HD5870. By how much is another story, as is how much room they will have between the HD5870 and GTX380 to place a GTX360.
We're talking about a monstrous >500mm^2 chip, and even if they are focusing more on the GPU computing side than on the 3D rendering side, Fermi should perform better than the much smaller Cypress. It had better, if they don't want to be in serious trouble, since it's a much more costly chip.
I can't believe people look at how GT200 was abnormally priced at launch and got completely s//tstormed by the 4870, and then conclude that this is going to be another flop like that.
Let me add some IQ (not image quality, INTELLIGENCE QUOTIENT) to the table: Nvidia, just like us, had no idea the 4800s would be so powerful, and they priced everything super high. After the launches the dust settled and Nvidia adjusted the prices accordingly. While I think the 4700/4800 series continued to be a better pick than the GT200b's, it wasn't a knockout after the price adjustments had been made. So the "total flop" nature of the initial GT200's was because Nvidia was caught with their pants down by the 4800s.
Now the 5800's have been released, and Nvidia knows both their price and performance and WILL price the Fermi boards accordingly. There's no way Fermi will be a $500 board that performs on par with the (by then) $300ish 5870. Yes, the chip is super big, and this most likely isn't going to be a great launch season for Nvidia, but it's not just going to be a super expensive board that barely matches the competition's half-priced board, like the GT200's were initially.
why does everyone want Nvidia to fail? All it means is that your AMD card will be more expensive next time.
They don't.
I personally wish nvidia would stop all this IDIOTIC behaviour they have engaged in over the past two years.
Renaming the same product multiple times as new material; bad Vista drivers for which they shifted the blame.
Marketing like the infamous "can of whoopass" event. Shipping products with design faults (the high-lead solder bump issues) and denying it even after it was exposed that they had been well aware of it a year before that point.
Restrictive PhysX behaviour, PhysX indeed being snake-oil moonshine fakery that does PHYSICS! ...at the expense of major performance drops.
We had physics in games for years before PhysX, with none of this strategic performance limiting to make new cards relevant.
Taking jibes at Intel. Slowing down DirectX API advancements.
They should adopt the stance of a professional company sometime and earn back the respect they lost.
I wish Nvidia would stop trying to be Intel or AMD and just concentrate on gaming... Jen-Hsun Huang is delusional in thinking Nvidia has that much clout. They need to stick to their CORE BUSINESS; CUDA doesn't matter to 99.9% of the populace!
But Nvidia has hung their entire business hat on its acceptance. Dumb!
I'm not even going to bother explaining what is wrong with this. If somebody else wants to have a go, be my guest.
OCN is full of blind AMD/ATI fanboys who have no idea what they are talking about. XS was more immune for a while, but the cancer has started to rapidly spread here as well.
ATI now has AMD. What does that mean? x86 license. Intel will soon have Larrabee, and obviously x86.
Nvidia? No x86.
Intel and AMD now both have plans to integrate GPUs on the CPU die.
See a problem here? Nvidia has to get a foothold in the CPU market somehow, and the only way that will happen is with GPGPU. And since they need to rely on it much more heavily than ATI does, their GPGPU support is a lot better.
Many developers want to use CUDA, except their applications require ECC, which GT200 lacks. Fermi fixes that and many other problems, and dramatically increases double-precision performance.
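To give an idea of what that means in practice, here's a minimal sketch (hypothetical, not from any real application) of a double-precision AXPY kernel, the kind of building block HPC software leans on. On GT200 this runs on a handful of FP64 units, which is exactly the throughput Fermi is supposed to widen.

#include <stdio.h>
#include <stdlib.h>

// y = a*x + y in double precision (the classic DAXPY building block).
__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(double);
    double *hx = (double *)malloc(bytes), *hy = (double *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0; hy[i] = 2.0; }

    double *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expect 5.0)\n", hy[0]);
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}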
Mate, it won't beat it by 60%, I promise you.
I know you want them to do well for whatever reasons.. but seriously, don't over-hype it or you'll end up being disappointed.. there's a reason nvidia isn't going on about game performance, and that's got me worried (and it should worry you too.)
(next bit not aimed at you)
It's just that they keep going on about all this precision stuff they've improved.. it's a graphics card.. people want it for games.. I'm sorry, that's the market they cater to.. going off to try to do a different thing is a bit stupid.. sure, by all means go for physx.. that's fine.. I just believe they are forgetting the core market... which is lame....
Completely agree, seeing as their market is gamers.. I'm not sure people are like WOW I WANT THAT... just to encode a movie in 10 mins as opposed to 20.. I'm sorry, people may use it and be like "oh this is cool"... but it's hardly a defining feature. Come on nvidia, let's see something good from you again like the 8 series, and cut the crap with this double floating point performance poop..
I completely agree.. but at the end of the day.. it CAN'T be a replacement for a CPU.. and I feel they're going to fall flat on their face in the long run :/ You say all this stuff, but think about when it's ACTUALLY useful, say a few years' time.. AMD will supposedly have an "APU" (GPU) on their CPU die, and I'm sure Intel will have come up with something, otherwise they will lose a lot of the laptop/nettop market imo (if it's implemented well). I can see the performance nvidia COULD bring to the table... but I just think it will be a hassle unless they put a lot of money in, and I think it'll end up being a money sink...
At the end of the day we WON'T have decent high-end graphics solutions on a CPU unless something new comes along.. I mean, we're close to 300W on GPUs now; slap that in a CPU and people will just call you a tard etc...
I dunno, I think nvidia will always have an awesome share of the graphics market.. but not if they go venturing off on this little crusade, cos imo until they get an x86 licence.. it's just going to waste money :/
(If you believe differently please say so, you probably noticed.. or not... I'll listen to your opinion and happily correct myself, or you, if there's proof or you convince me :p: etc.. :) )
lol ...
Read my edited bit cos I missed your post.. I wanna see what you think of my ACTUAL views about them going more and more general processing and less gaming?
-edit again... I think I put my points across very badly haha-.... I get what you're saying, I just think it'll be a money sink cos at the end of the day it isn't a CPU; maybe it can help out in a few odd bits here and there.. but yeah, with small GPUs going onto CPUs, AMD/Intel could then do the same thing nvidia are doing, and then all nvidia will have left is their graphics cards again.. :/ I think to save time and money they should just focus on graphics/physx and the like.
How are they "forgetting the core market"? What have they done to make you think gaming & discrete video in general isn't an important segment to them?
They are expanding and working to create new markets that have not yet matured. If every inventor stopped every time somebody said "that won't work" or "nobody's going to need that", we wouldn't be where we are today.
Luckily the world isn't full of pessimism, and there are always people who can see the light at the end of the tunnel, looking for new ways to do the same things we do today better, faster, cheaper.
It's just that EVERY bit of info I've seen so far doesn't say anything about GAMING.. that's all. Maybe it's just poor marketing or they're holding it back, but meh... seems silly, as you and I know the people buying it are going to be using it for games, not to accelerate Flash or whatever :/... makes sense :D??
I agree, but in my post one above yours (another edited bit) you'll see my views on that, i.e. it's a good idea, but in the long run AMD and Intel can kinda just block them out by doing it with the APU/GPU-on-the-CPU sort of thing.. and tbh there's nothing stopping them. I just think that although it's a good thing to "adventure" like this, it looks bad for the future. I may be wrong, but meh??? I think you might see where I'm coming from though.
I'm not being pessimistic at all imo, I like change.. I loved the 8 series and the 4 series, they were both awesome, same with say the Athlon 64 owning the Pentium 4.. I just believe this whole general processing thing will fail.. for the pure fact that nothing's stopping AMD and Intel from doing it themselves in a few years' time once they have it on die, cos imo once it's on there and accelerating a program it's still going to be a massive improvement over normal CPUs, but I don't think people will purchase a card to do something the CPU can do pretty much just as well (in the future, I have to add) :shrug::shrug:
The 60% was relative to a GTX 285, and it was to get Zalbard to understand my point about theoretical performance.
Review websites skew their results by adding a lot of AA and AF. It's not an apples-to-apples comparison if you ask me. We need a GPU benchmark similar to Cinebench. All a 5870 can do is apply more AA to current games, and that doesn't show the true power of the card, just the ROPs. I don't think games will benefit that much from new cards until Crysis 2 or the next consoles. A 5870 makes an Xbox 360 look like a Wii. You basically have to put up with crappy textures and high fps for a while.
The market for GPU computing is estimated to be 50% of the desktop market. Don't ask why, ask why not. This is a link to the CUDA home page and all of its applications. It's more than just encoding; there are a lot of real-world uses for CUDA, just not on the desktop yet. In the future when we are ray tracing, maybe. Fermi is a GPGPU with fixed-function graphics abilities, just like all DX10 GPUs. I don't see why this would affect D3D performance.
http://www.nvidia.com/object/cuda_home.html#
Are you saying that more than 1% of the population is running Folding@home..? :rofl: Or having sleepless nights waiting for the next greatest compute GPU..? :shakes: Or has $499 to spend on a video card...? That's why many of us are writing off Nvidia: their drastic move toward non-gaming leaves many within the industry wondering about the COMPANY...
Secondly, almost all the people who fold (or will use these added Fermi "features") are right here on XS... everyone else doesn't give a rat's ass about CUDA. <-- 'tis da truth
For a small example demographic: there are some 12 million people who play World of Warcraft, and none of them know about CUDA... nor do they care or even have reason to care. All that matters to people who buy video cards is how well they run their computer's graphics & games.
Understand: the CPU is for calculation and the video card is for graphics... Tesla only speeds up compute calculations, nothing more. People don't buy video cards for that...
Finite element analysis, high-precision scientific computing, sparse linear algebra, sorting, etc. may drive Fermi sales, but how does that benefit the end user here? Especially if, in the near future, people can just slap a Larrabee on a Hydra board. Then you have what Nvidia is offering.. if you need it.
Agree with the first bit, I'm hoping for a bit more.. imagination from their next architecture next year..
And I agree, I think it can have its uses... like, say, how enabling physx makes the CPU score amazing in Vantage... you could have the GPU kick in and give a nice boost to various apps... all I'm trying to say is I think when AMD/Intel get it on their CPUs they will just end up blocking nvidia out of THAT potential market :/ Of course they wouldn't disappear, cos it's always nice to get a little bit more of a boost. :)
I would say they are trying to get a leg up on the competition with their heavy Tesla push. If they can actually get their gpgpu initiative to take root first with developers, they can very well be a market leader until the other players catch up.
There are definitely no guarantees of landslide success, but the potential market is there if they can get enough developers aware of & on board with the technology. It may not necessarily be huge with the general user, but in, say, the medical & research fields this could be a huge chance for them to make inroads into untouched markets with potentially much higher margins.
Nvidia is essentially getting squeezed out of mainstream markets; with chipsets pretty much dead in the water, they need to find other ways to sustain their growth or else go the way of VIA.
Their best chance is to move quickly now while they have the resources and influence to still make big moves.
I don't think they're forgetting their roots at all; they're trying hard to make more noise about their gpgpu initiatives to bring about awareness and spark curiosity & imagination with potential developers & customers, which seems to be working on the awareness side at least.
We'll have to wait and see, which is the hard part in the "I want it yesterday" technology world.
Ok, I understand what you're saying.. pretty much, people KNOW they make graphics cards etc.. and that those will be good for gaming etc.. but they're pushing the new stuff now so they have a bigger/better chance in the long run! :)
It does make sense to me personally, but I'm still worried, like you said, that they are being squeezed out of the other markets (which is one reason I'm peeved about physx.. sure, people keep arguing we could do a lot of the stuff on the CPU, but it's only just come into the spotlight; give it time and I'm sure we will need something decent to run physics etc..). And as I said, they may be able to get a big push forward, but I can't help thinking people will opt for OpenCL; I mean, cuda hardware runs it, ati runs it, processors run it. I believe it will be easier for developers to go that route, which is why I think nvidia will have to splash some cash...
I do hope they made the right decision... if it works it will be a huge relief; if not.. then I guess I'll start worrying when the time comes :)
Jowy Atreides, the renaming was only a side effect of what really went on... they had problems with several product refreshes and new chips; since those were delayed or failed, they had to rename the same old parts and shrink them... the real problem is that their refresh and new-gen roadmap was too ambitious and way out of touch with reality... that, plus they often didn't have proper plan B options, which is something you should have in a competitive market where projects DO fail every now and then...
with 40nm ati played their cards very very well... first they stuck their little toe in the water to check the temp, rv740, then decided it's too cold and prepared to stand on two legs: a rather small juniper, same speed as the previous gen but cheaper and cooler, and a bigger and beefier cypress... even if cypress had failed, they could have come up with a dual-juniper card to tackle the 295, and juniper is fast enough to force 285 prices lower as well, even if there hadn't been a cypress to outperform it...
compare that to nvidia's gt200 strategy: there's gt200... and that's it... :confused:
and gt300 so far looks to be the same thing, there will be gt300... and then that's it... they said they will cut it down, but when?
even if gt300 arrives in small numbers in december, then what? g92 and gt200 are both outperformed by juniper and cypress respectively, and what's the point of having a halo product if there is nothing else to sell but an expensive halo product?
it's a shame, we are already seeing higher prices for juniper and cypress than we would have if nvidia had a 40nm gt200 or something else to compete with ati's new chips... :/
So you want NV to just die a slow death? Well, of course you do, xoulz; you hate Nvidia.
NV is simply trying to create outlets for cash, since it's going to be left out in the cold a lot more when everyone integrates the GPU onto the CPU, which will eat into their sales.
Adding to the equation is Larrabee, which is invading NV's retail space, and NV has to do something just in case Larrabee does destroy everything out there.
By your logic, AMD, although a CPU company, had no damn right to even challenge Intel as a CPU company, based on the difference in their size.
Do you think Intel has no right to enter the GPU market, because they are a CPU company too?
Man, people really hate NV. But when you look back at it, Intel has done much shadier stuff than both NV and AMD, yet people love them. Bribery and using clout to make sure your CPUs have an unfair advantage: Intel wrote the book on this.
:banana::banana::banana::banana:y marketing? AMD and Intel have both done this. AMD's most recent slides were taking shots at NV like there's no tomorrow. They have essentially bought Charlie (his website contains only AMD ads) and are using him to push their propaganda. Don't you think it's dirty to start rumors to lower your competitor's stock price? Even NV hasn't sunk that low yet.
The whole renaming thing got a little carried away, but it really ripped no one off, because most people didn't rebuy the same card, contrary to what the haters think, and they typically dropped the price of the card and put it in a lower target segment when they did it.
Sure, it takes a bit of focus off the fact that the card is not a DirectX 10.1 card, but really, did people miss that much by not getting DirectX 10.1? If anything this hurt NV more, because it made the transition to DirectX 11 a bit more difficult.
See the problem with all of this is actually very simple.
Most of us buy video cards to play games and that's it. I don't give a rat's butt if it can do anything else.
So forgive me if I'm less than impressed when Nvidia comes out and starts trying to sell me their products based on non-gaming related features. I just don't want or need them. :shrug:
Well there is this, new cards from Nvidia. Yea, that will do.
You talk as if it will be slow or something. MOST of the transistors come from more than doubling the shaders from 240 to 512, and that alone will bring the biggest performance gain in games.
GT200 had 240 SPs and 1.4b transistors.
Fermi has 512 SPs and 3.0b transistors.
Guess what? The ratio of transistors to SPs on Fermi is virtually the same as on GT200. What does that mean?
All the people saying "omg all those transistors are wasted on GPGPU" are freaking morons!!! The GPGPU features on Fermi are at no extra cost to gamers, and game performance is not sacrificed.
To calculate the ratio of transistors to SPs, you divide transistors by SPs. This figure is virtually the same on Fermi as it was on GT200. Other factors don't change that.
Now, when you do factor in ROP counts and bus width, among other things, it is higher on Fermi but not by any significant amount.
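For reference, the arithmetic that ratio argument rests on, using only the transistor and SP counts quoted above:

#include <stdio.h>

int main(void)
{
    double gt200 = 1.4e9 / 240.0;   // ~5.83 million transistors per SP
    double fermi = 3.0e9 / 512.0;   // ~5.86 million transistors per SP
    printf("GT200: %.2fM/SP, Fermi: %.2fM/SP, difference: %.1f%%\n",
           gt200 / 1e6, fermi / 1e6, 100.0 * (fermi / gt200 - 1.0));
    return 0;
}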
Nvidia needs to double down and split their hand. Make a dedicated PPU and a separate GPU. They can still support CUDA on the GPU, just not make it the one and only thing it's designed for. That would free up silicon and make them competitive again. There is no point going in the direction they're going right now, because it's a dead end. You can't be competitive in 3D graphics if you're not making GPUs.
nvidia has alienated both x86 licence holders, moved away from the chipset biz, run into a wall with Ion (Intel not selling Atoms without the chipset + Pineview coming) and put an awful lot of eggs in the GT300 basket. Everyone SHOULD WANT nvidia to succeed with Fermi. No other way around it. If it fails, we are screwed...
Dude, do you know how to read? Like I stated, there is no problem with keeping CUDA on the GPU. They only need to add enough performance to make games that use it run well and smoothly. 3D games today don't need the amount of GPGPU power a server farm does. That way they can focus on making efficient 3D graphics cards in a competitive market. Making huge dies that cater to the scientific/workstation market and selling them to consumers to browse the internet isn't a smart business plan. If I need the power of a server farm I'll buy a GPU that can handle that, but if I want to play games, give me something to do just that and do it well for a good price! The only way to do this is to make different cards for their purposes. Look at ATI: they have 2 different dies to serve different markets. How is GT300 supposed to be competitive with Juniper, a card that costs 1/5 as much to make and is 1/4 of the size? You don't have to be a genius to see what's wrong here, but oh yeah, you're dyslexic. So here is a way to make you understand better: Nvidia has bigger dies that cost a lot more to make. That way they can make more money selling the same big card against a tiny ATI card.
Cost aside, Fermi will tear Juniper to shreds, both in gaming and GPGPU. It will beat the 5870 as well.
Consider: GT200 can execute 33.3% more dependent instructions than RV770, and it is typically faster across the board.
Fermi will be able to execute 37.5% more dependent instructions than Cypress-XT, and the efficiency at which they can be executed is greatly increased. Cypress-XT is not much, if at all, more efficient than RV770.
Therefore it can be expected that Fermi will be faster than Cypress-XT by a greater factor than GT200 was than RV770.
That's fine and dandy, but you don't need a missile to swat a mosquito. GT300 only serves the highest end of the market with no room left below it; sure, they can harvest dies, but it won't be anywhere near as profitable as Juniper is. In business this is taboo and worse than blasphemy. ALL of nvidia's AIBs will jump ship because there is no money to be made.
You think XFX jumped ship because they were making money hand over fist selling nvidia parts?
totally agree! :toast:
does intel try to stuff lrb into their cpus?
of course not! but nvidia is basically stuffing their attempt at something like lrb into their gpu...
only 23W? 0_o
well, even if it succeeds... then what? there's an expensive super VGA that can run 2560x1600+ with 8xAA or even 16xAA... you need a 30" display for $1000+... how does that produce enough income for nvidia to survive? what they really need is a gpu that's anywhere from slightly slower to faster than rv870 at the same or a lower price... not a super fast chip that costs a lot more... i'm really confused why they don't just cut gt300 in half and have it almost immediately after gt300's launch... it can't be that hard to scale down a big gpu...
well, and then there's still tegra, but so far arm isn't taking off, and without arm taking off... tegra will have a hard time as well... and i'm curious how good it actually is compared to the PowerVR graphics that arm designs usually integrate...
if they sacrifice performance at what it's actually meant for to get those features, and the features are only gimmicks, then yes...
would you think it's a good idea for a ferrari to somehow sacrifice horsepower to power a cocktail mixer inside the car? that's kinda what nvidia does with cuda...
It doesn't work that way. There's a lot of stuff in a GPU apart from the processing units (SPs, TUs and ROPs), a lot of logic, and usually doubling the number of processing units doesn't translate into doubling the number of transistors. For example, going from RV670 (320 SPs, 666 million transistors) to RV770 (800 SPs, 956 million transistors) meant a 2.5x increase in processing units (and not only the SPs were increased by 2.5x) but only about 1.45x in transistors.
If the ratio of transistors to processing units has been kept, that means lots of other stuff is being added or increased. Most of these changes, in GT200 before and in Fermi now, are on the GPU computing side. And a great deal of them are GPU computing features aimed not at mainstream GPU computing (which could end up being useful to the mainstream consumer) but at what they call the "High Performance Computing" market (which they cover with their Tesla solutions).
Even if you use all the insults in the world, that doesn't change things. Nothing is free in a processor; nothing gets calculated magically. There need to be transistors in there, and if NVIDIA are adding lots of HPC stuff to their chips, there is going to be a cost in transistors and in die area, regardless of people being more or less moronic.
And when you feel tempted to call someone "a freakin moron", you should think twice and, in the end, not do it. You know, it's not as if it were a useful argument, or a useful attitude.
EDIT and PS: I'm not trying to say that Fermi "will suck" as a 3D rendering device, or that it's going to be exactly the same as GT200. I'm not a fortune-teller, and using logic, I suppose that some of the changes will also benefit 3D rendering tasks. But what's obvious is that NV has been focusing a lot on the HPC market lately, and they are including lots of features aimed at that market in their chips. And those features will have a cost in transistors and die area. Even if you want to call the people who think so "freakin morons".
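The RV670 to RV770 comparison above is easy to check with the quoted numbers, and it's the crux of the counterargument: the processing units grew much faster than the transistor count did.

#include <stdio.h>

int main(void)
{
    double unit_growth  = 800.0 / 320.0;   // RV670 -> RV770 SPs: 2.5x
    double trans_growth = 956e6 / 666e6;   // RV670 -> RV770 transistors: ~1.44x (rounded to 1.45x above)
    printf("SPs grew %.2fx, transistors grew %.2fx\n", unit_growth, trans_growth);
    return 0;
}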
Eidos programmer: "Hey, we've got a great idea for an awesome new game!"
NV TWIMTBP rep: "Well sparky, let's not get too crazy. What's your idea?"
Eidos programmer: "Well, we all know you like to pervert a normal game by adding PhysX and in-game AA only for your cards so we have a great new game idea to do just that again."
NV TWIMTBP rep: "Cough, cough..well, well, let's not screw the other 50% of gamers again..we have a new idea."
Eidos programmer: "What?"
NV TWIMTBP rep: "Let's screw 100% of the gamers! Yes, we'll pay you $100,000 to not make that crappy game and make a scientific app instead.
Hell, we'll even throw in the programmers for you."
Eidos programmer: "WTF?"
NV TWIMTBP rep: "Well now that 'gaming' is of secondary importance to us, so are you!"
Eidos programmer: "Gulp!"
NV TWIMTBP rep: "And if you DO decide to make that game you really want to make without the green goblin god's blessing, you better make it using double precision, ECC and only CUDA cores, period!"
Eidos programmer: "Oh &%#$!"
NV TWIMTBP rep: "Have a nice day!"
NVidia - "The Way It's Bent To Be Played(we mean executed)"
If Fermi succeeds (we are talking about performance/$ here, plus scalability and margins), then nvidia will have the tools to offer derivatives based on the arch for lower market segments. Fast. I think they canceled a GT200 respin due to the fact that it would not be very competitive (as far as margins are concerned), so this launch will lack full-range market coverage for their new line.
If Fermi doesn't succeed in the 3D gaming market, nvidia will be in a lot of trouble, due to the fact that the HPC market is minuscule atm (and is expected to stay so for the immediate future).
Look, I'm the first to say I can't understand why nvidia did not put a tessellation HW unit, or a double rasteriser, in there. Especially since the transistor penalty would be low. From what I understand, we may in the future see R&D on 2 dies, one for the gaming segment and one for HPC. The magic word here, though, is arch SCALABILITY. Going for the performance crown is like shipbuilders going for the Blue Riband, profitability be damned. It gives you the marketing juice needed to boost the rest of the line, but with no line to boost and the market shifting to other price segments, it's a moot point. Nvidia had better understand this, and fast...
Yep, with no HW tessellation I am not sure how the card can be FULLY DirectX 11 compatible??? Emulated tessellation can't be as fast as HW tessellation. DX11 tessellation is very interesting; hell, it's been interesting ever since ati introduced it.
I also agree with the users worrying about no HW tessellation. Tessellation is one of the big features of DX11, maybe the biggest: smoother surfaces, better models. I do not understand why they didn't implement DX11 to its fullest.
It may bite them in the :slapass: or not; we will have to wait for AvP to see what this missing feature means.
It reminds me of FX 5800 vs Radeon 9700 Pro all over again, when the FX series didn't properly support DX9. That showed later when games like Oblivion came out that required SM 2.0, and a lot of effects that could be enjoyed on the 9800/9700/9600/9500 series couldn't be enjoyed on the FX series without a serious performance hit.
But who said they are not implementing DX11 or tessellation to its fullest?
DX11, as an intermediate layer between hardware and applications, is an interface that offers certain functionality (with a given interface) on the application side, and that requires the hardware to implement certain functionality (with a given interface) to be compliant with the API.
What the hardware does internally to execute (and resolve) that functionality is its own business, as long as it takes the right inputs and gives the right outputs.
NV's decision to implement tessellation entirely as operations on the CUDA processors, instead of having dedicated hardware (a dedicated set of transistors) to handle it, is a technical decision. We will see what it means in the real world when we have numbers.
Usually, not having dedicated, fixed-function hardware for a task means that you have to use non-specialized hw to do it, so you usually spend more resources doing it. But it has its benefits. Dedicated fixed-function hardware takes resources too (it's made of transistors that could otherwise be spent on more general computing resources). And it raises the complexity and cost of development.
We will see what this implementation choice means in the end when we have data... probably it will take a bigger performance hit when doing tessellation than the Radeon cards, especially in shader-intensive games, but maybe it's not that noticeable.
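To make "tessellation on the general-purpose processors" a bit more concrete, here's a toy sketch (purely illustrative; real DX11 tessellation with hull/domain shaders and tessellation factors is far more involved, and this is not how NVIDIA's driver actually does it): splitting each triangle into four by computing edge midpoints in an ordinary compute kernel, i.e. on the same units that would otherwise be running shaders.

#include <stdio.h>

struct Vec3 { float x, y, z; };

__device__ Vec3 midpoint(Vec3 a, Vec3 b)
{
    Vec3 m = { 0.5f * (a.x + b.x), 0.5f * (a.y + b.y), 0.5f * (a.z + b.z) };
    return m;
}

// Split each input triangle into 4 sub-triangles by inserting edge midpoints.
__global__ void subdivide(const Vec3 *in_tris, Vec3 *out_tris, int n_tris)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n_tris) return;

    Vec3 a = in_tris[3 * t + 0], b = in_tris[3 * t + 1], c = in_tris[3 * t + 2];
    Vec3 ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);

    Vec3 *o = &out_tris[12 * t];          // 4 triangles x 3 vertices each
    o[0] = a;  o[1]  = ab; o[2]  = ca;
    o[3] = ab; o[4]  = b;  o[5]  = bc;
    o[6] = ca; o[7]  = bc; o[8]  = c;
    o[9] = ab; o[10] = bc; o[11] = ca;
}

int main(void)
{
    Vec3 tri[3] = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };
    Vec3 *din, *dout;
    cudaMalloc((void **)&din, sizeof(tri));
    cudaMalloc((void **)&dout, 12 * sizeof(Vec3));
    cudaMemcpy(din, tri, sizeof(tri), cudaMemcpyHostToDevice);

    subdivide<<<1, 32>>>(din, dout, 1);

    Vec3 out[12];
    cudaMemcpy(out, dout, sizeof(out), cudaMemcpyDeviceToHost);
    printf("center sub-triangle: (%.1f,%.1f) (%.1f,%.1f) (%.1f,%.1f)\n",
           out[9].x, out[9].y, out[10].x, out[10].y, out[11].x, out[11].y);

    cudaFree(din); cudaFree(dout);
    return 0;
}

The point of the sketch is just that the work lands on the general shader/compute units, which is where the performance-hit worry in the post above comes from.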
You're right. But CUDA means another software layer between DirectX and execution, which I don't think is beneficial.
Of course, everything is moving toward a less specialized way of executing commands, but if that were done through a single software intermediary, like DirectX, for all GPUs, that would be OK. When you add another one, CUDA, then it doesn't make that much sense.
I don't think it works that way. I'm not an expert, so excuse me if I'm wrong about this, but what I think happens is that the drivers of the hardware device (the videocard in this case) expose an interface which includes the functionality required by the APIs they are compliant with. Then the job of the drivers is to convert those calls into the hardware instruction set so the hardware can do its work. The difference between having dedicated fixed-function hardware and doing it on general computing processors like the CUDA processors is probably that in the former case you would have specific hardware-level instructions to drive the dedicated fixed-function hardware, and in the latter case you would have to use the generic instructions of the generic processors to do it. I don't think there's going to be an additional layer anywhere.
The layering usually is:
Application
DirectX
Drivers
Video card
If you do tessellation through CUDA, you have:
Application
DirectX
Drivers
CUDA
Video card
With hardware tessellation, the hardware already knows how to do the job; you don't need another piece of software (in this case CUDA) telling the shaders how to do it, the shaders just receive the data and do the math.
@Farinorco: the implementation of PhysX over CUDA is not as efficient and effective as the real thing, the "PPU". An Nvidia GPU loses more than a few shader units when it emulates the PPU.
EDIT: Why is Nvidia not co-operating with VIA??? I am very confused as to why nvidia does not support VIA in designing chips, as well as financially. VIA could then make a bigger chip and multi-core it to rival low-end Semprons/Celerons.
I had a VIA SoC based on a C7 and it was very good; it was a bit slow, but it was quite good. A similar SoC with a larger core and multiple cores could rival low-end Intel/AMD parts such as Semprons/Celerons.