You're forgetting drivers, and that Fermi is a different architecture from GT200b.
Fermi could be the Pentium 4 of GPUs: looks amazing on the spec sheet, pretty crap in reality.
Mind validating your reasoning on "I don't see Fermi being faster than the GTX 295"?
You just made a statement. Got anything to back it up? If not, why should anyone take it seriously? It'd be no different from me saying "the Earth is flat".
[img] http://img513.imageshack.us/i/fermispecs.jpg/ [/img]
If you look at the Fermi specs, everything is more than doubled from GT200, which is the GTX 280. The GTX 295 isn't even made of 2x 280. So what's your reason for it not being faster?
Specs show that Fermi will be 30% faster than the GTX 295 at PhysX, CUDA, and other GPGPU apps. I play video games, and the specs suggest it won't be faster than the GTX 295 in gaming.
Sorry, I meant simple reasoning, just trends. I do admit Fermi will be faster than the GTX 295 at CUDA and at playing Batman.
So basically, all GPU silicon comes through TSMC until 2011? Ugh... The Great GPU Drought of 2009 will be called The Great GPU Depression of 2010. :(
Globalfoundries roadmap:
http://www.semiconductor.net/photo/1...admap_jpg_.jpg
How could it not? 512 shader processors, GDDR5 on a 384-bit bus, almost double the specs of GT200. The HD5870 doubled the specs of the HD4870, and it is as fast as, if not faster than, the 4870X2. So if it's true for ATI, it can't be true for Nvidia?
You're the biggest fanboy since gosh. :rolleyes:
I don't need to show benches. You keep spouting nonsense and all this bull about the 5850 or whatever being faster than a GTX 285; no one cares but you. :rolleyes:
It's like you have some kind of inferiority complex. You don't need to keep posting "MY HD5850 is faster than your GTX 280!!!" and so on, and posting negative crap in NV threads.
Fermi does have double the specs of a GTX 285, but from what we know, they are having problems achieving high frequencies.
Low frequencies mean it will come in slightly below what Nvidia intended, just like GT200 at release.
I am a fan of ATI's products. I have favored them over NV for a couple years now.
Your fanboy stance in every Fermi thread is over-the-top ridiculous, childish, annoying, and Xtremely pointless, even to another ATI fan like myself.
Just thought I would let you in on that. Good day ;)
All the P4 had was clock frequency, which was marketed heavily, just like ATi does with flops. We're talking about GPUs, and if you haven't noticed, Fermi is not that different from previous architectures aside from the GPGPU features. Getting rid of the MUL unit, 200 GB/s of bandwidth, and more R&D compared to R800 all hint that won't happen.
Please stop posting bad/false information. Cypress yields are MUCH higher than 50%. Based on early numbers, which are supposedly lower than the true yields, I believe yields are in the 65-75% range, though they are unfortunately slightly lower than the aggressive goals AMD set. Juniper, on the other hand, is doing fantastic, is "exceeding expectations," and seems to be nearing yield numbers that pick up where RV770 left off.
Trust me, AMD is making bank right now at current prices, since the BOM for Cypress is only slightly higher than RV770's. Basically, they were able to replace the 4870X2 with something that cuts their costs by a good deal, meaning a large margin.
BTW- Who is on a .40nm production process? That is amazing, that's like 1% the size of TSMC's current production process...:p:
Wow, please don't post info when you have no idea on numbers. OEMs did buy up quite a lot of AMD's new inventory but it was not in the "hundreds of thousands."
GF100 will have roughly the same performance advantage in gaming as in GPGPU, or ~20-40%.
How so? Have you actually compared the REAL specs of GF100...
There are a few places that could actually bottleneck the chip compared to GTX295. I do expect GF100 to be around GTX295 performance and beat it in some cases but not by 30%.
No... 28nm should be ramping at GF before the end of 2010, we probably won't be seeing anything in Q3, though it isn't entirely out of the question, but Q4 seems to be the most probable.
Umm... ATi wasn't the one that started the Flops war, that started a loooong time ago. So how much has been spent on GF100 and R800 R&D? Please share...
There's a huge difference: everything you say is just an attempt to bait people into a useless argument, but instead of using reasoning in the debate, you simply insult them.
I've given up trying to debate, because you use no logic and, at best, ad hominem arguments when you're not outright insulting people.
I simply feel the need to raise my voice (I know it's useless over the internet) when you are starting to piss everyone off.
You are completely obnoxious. Even people like clairvoyant and Goldbrick, and to a lesser extent myself, who are AMD/ATI card holders, are getting completely annoyed. At least most fanboys have some reasoning in their debates; you just seem like you want to cause a ruckus and annoy.
You might have a right to your opinion, but when no one agrees and most of your comments involve insulting someone, it's just going to be a one-way ticket to banville.
Your personality is so sour, it's no wonder you've been banned from other forums and your own forum is sparsely populated.
He was talking about dual GTX 280s; try more reading and less jaw flapping. We all know the GTX 285/280s have been surpassed, but they were perfectly fine to buy back when they were under $350, long before the HD5870 came out. Also, this thread is about Fermi, which needs to be more powerful than the 5870 to succeed.
That Roadmap is a bit old.
See page 6 of this PDF from AMD's Financial Day earlier this month for a newer one.
GLOBALFOUNDRIES Presentation
It looks like the new architecture is aimed more at GPGPU. That's not to say it won't be different on the gaming side, though. Just going by specs, it should be 20-40% faster than the 5870; pretty much the same as last gen.
Honestly, I think all threads regarding Fermi should be locked unless a thread actually has NEW info, and not info coming from Nvidia's PR of all people. I'm talking actual benchmarks/official specs/release dates/performance figures. Ever since the 5800s released, anything regarding Fermi has been pretty much the same. We've all been repeating ourselves for the past 2+ months now :ROTF:
The next generation of cards is going to be bigger, and AMD has said it wants to skip 32nm and go straight to 28nm for its next-gen video card.
They are not going to make something faster than Fermi on 40nm using the current architecture.
Consider that the increase in speed going from an HD5850 to a 5870 is only about 12 percent, when the specs indicate it should be 30% faster.
Making something faster than Fermi, if Fermi lands at the speed of GTX 285 SLI (which is not much to expect from a new architecture), would require AMD to make a big chip on the current architecture (which is against their design philosophy). Or they release a refresh which closes the gap but doesn't end up faster (the most likely scenario) and put all their resources into R900 on 28nm. If they do this, R900 will be coming out in 2011.
If they try to make R900, which is supposedly the new architecture, at 40nm, it's going to be a big chip, and judging from the 5970's bottlenecks, a dual card would be useless or impossible.
AMD has already said it is skipping to 28nm and that R900 will be a new architecture. I don't think they are going to put R900 on 40nm, because they are running into too many bottlenecks.
If Radeon 6000 comes early, they can release it on 40nm first (well, why are they offering 40nm early in the first place?), which makes it Fermi's party pooper. I'm not sure AMD would like that, though; they would be killing their Radeon 5000 sales (although we have to admit they're already selling everything they make; then again, getting people to buy twice is better than once).
Maybe AMD will keep everything in sync and launch Radeon 6000 with Bulldozer, in other words in 2011. Launching a whole platform is profitable.
These are all just speculations, though. I would love to see Fermi in action; they had better bring something impressive. ATI has Eyefinity (which is awesome IMO; I wish I had a setup myself, and I'm waiting for Radeon 6000 to build one :D). 3D Vision isn't really demonstrable to everyone.
We're still waiting for your Fermi demo, Nvidia. ATi demonstrated Evergreen 6 months prior to launch.
EDIT: Perhaps AMD will move Radeon 5000 series chips to GlobalFoundries later, if Radeon 6000 is to come in 2011.
Dude, if people from your own "camp" are already telling you this, something must be wrong. Try not to attack the messenger; attack the message instead.
You would not insult someone on the street in conversation because they were a fan of another brand of shoes than you. Don't do it here either.
Something tells me Fermi will compete with ATI's 6000 series and not the current ones. It makes no sense to release a card that competes with something that has been on the market for 6(?) months.
This is just my opinion.
What "camp" am I in? I own both Nvidia and ATI, if I'm in a camp it would be the 30" monitor camp. I want the best performing at my resolution. Right now ATI has the best solution. Fermi is late and with the lengths Nvidia has been going to promote it, it's going to be a fail. I have my hopes that the Fermi refresh will be better, but you never know if the new NV30 is just around the corner. I am disappointed in Nvidia for talking the talk but not walking the walk.
The truth of the matter is ATI is currently on the way up and Nvidia is just standing still.
Exactly. Nvidia looks to be playing catch-up with ATI even after Fermi is released.
Umm, how? Fermi is a 512-shader card that is very flexible and has a huge edge on Evergreen in FP64 FMA. But basically Evergreen can do what Fermi can (FMA, etc.), just with less flexibility, but nonetheless...
If AMD's approach of doubling the number of shaders continues, HD6xxx may have close to 3200 shaders and slightly less performance than a 5970!!
That of course means 640 fat shaders, compared to the 512 SPs that Fermi has. In the end Fermi/GTX 380 will compete with the 5970 and 5870...
Fermi's FP64 will be better for apps, but I'm not sure about games; the console ports don't need much. ;)
GPU apps will need to take off, because as it stands there's too much untapped GPU potential out there sitting idle for lazy devs. More encoding programs need to start using the GPU, and not just one GPU, but however many a person has in their system.
That could all be true.
But I am inclined to think that the 6xxx series will be a new arch.
The 5x00 series was just the same as the 4x00, only doubled.
The same happened with GT200 over the old G92s:
just a doubling of the specs, which was evident in that the 9800GX2 was about the same as the GTX 280 in the beginning, until better drivers appeared.
I really hope Fermi is this amazing card NV makes it out to be; that way it might be competitive with the 6xxx series from ATI.
As long as it performs well and the price isn't sky high, it'll be a good card.
The main things for me are its length and power requirements.
I'm tired of having massive towers just so I can accommodate a massive card.
I'd like to somehow see a high-end card that's only 9" in length.
But I might just be dreaming there.
Maybe you have noticed that, like many others here on XS, I'm not the average Joe; I have a bit more money to put into my hardware, about €5-15k each year. :)
I believe this is what nVidia is aiming for, but until then, over the next 6 months, nVidia is going to lose a big chunk of money, because the HD5970 and HD5870 from ATI/AMD are beating every card they have at a lower price.
Sooner or later nVidia is going to drop GTX 285 and GTX 295 prices heavily in order to compete with the ATI/AMD HD5970 and HD5870 in price/performance.
C++ itself is mostly just a superset of C. Slower. Bloatier. Not really suitable for hardware, due to its nature. Basically, the "C++" on Fermi will be "C with some C++ influences".
The only real advantage will be C++ libraries. People will still write very C-like code, because it's more efficient.
I'm waiting for GF500; it should have Python and Ruby interpreters built in.
Nvidia would be best off if they simply shut up. All their blabbering about Fermi isn't getting them anywhere; it's just more and more bad PR. They should start talking once they get it into the hands of a legitimate reviewer, and then build hype. For now, the more they talk, the more people buy ATI.
Completely off topic, but +1
This right here is the essence of the problem with the internet: there is no accountability. From false advertising to viral marketing to the lack of ethical standards and repercussions, the way the internet works is going to have to change radically, or there is trouble ahead. IMO!
BTW, that's nothing against you Safan, just a misplaced observation that seemed to fit well with his post! :D
Now, back to your regularly scheduled program... :p:
nVidia isn't going to drop the price on the 285/295; nVidia is going to drop the cards themselves instead. The cards cost THIS much to manufacture; if they lower the price, they lose money, and that's a stupid thing to do when the replacements are around the corner.
Yep, that's what I think, but nonetheless I have written a couple of programs that exceed 200 LOC in C; I hope they work on Fermi.
The other thing I'm not sure of is the speed: how much faster is it than, say, a 3GHz C2D at compiling? I do hope they bring all the C++ libs in, though; programming Fermi in C using C++ resources would be a lot of fun.
How have you not read the fermi white paper?
I agree about C++, not so much about DX11 being a douche. I mean, yeah, C is popular, but DX11 supports the DirectCompute API. I know it's slower than C, but it's still easier to write for than lengthy C. The other bright star is OpenCL; even that is slower than C, but again, easier to write for.