Please point me to a post I made that is insulting.
Well, I guess I must have had it wrong...
The next generation of cards is going to be bigger, and AMD has said it wants to skip 32nm and go straight to 28nm for its next-gen video card.
They are not going to make something faster than Fermi on 40nm using the current architecture.
Consider that the speed increase from an HD5850 to a 5870 is only about 12 percent, when the specs indicate it should be 30% faster.
Making something faster than Fermi, if Fermi really is around GTX 285 SLI speed (which is not that much to expect from a new architecture), is going to require AMD to make a big chip on the current architecture, which is against their design philosophy. Or they could release a refresh that closes the gap but doesn't end up faster (the most likely scenario) and put all their resources into R900 on 28nm. If they do that, R900 will be coming out in 2011.
If they try to make R900, which is supposedly the new architecture, on 40nm, it's going to be a big chip, and judging from the 5970's bottlenecks, a dual-GPU card would be useless or impossible.
AMD has already said it is skipping to 28nm and that R900 will be a new architecture. I don't think they are going to put R900 on 40nm, because they are running into too many bottlenecks.
If Radeon 6000 comes early, they can release it on 40nm first (though why would they pull it in early in the first place?), which would make it Fermi's party pooper. I'm not sure AMD would like that, though; it would kill their Radeon 5000 sales (although we have to admit they're already selling everything they make, and getting people to buy twice is better than buying once).
Maybe AMD will keep everything in sync and launch Radeon 6000 alongside Bulldozer, in other words in 2011. Launching a whole platform is profitable.
This is all just speculation though. I would love to see Fermi in action; they had better bring something impressive. ATI has Eyefinity (which is awesome IMO, I wish I had a setup myself. Waiting for Radeon 6000 to set one up :D). 3D Vision isn't really demonstrable to everyone.
We're still waiting for your Fermi demo, nvidia. ATi demonstrated Evergreen 6 months prior to launch.
EDIT: Perhaps AMD will move Radeon 5000-series chips to GlobalFoundries later, if Radeon 6000 is to come in 2011.
Dude, if people from your own "camp" are already telling you so, something must be wrong. Try not to attack the messenger; attack the message instead.
You wouldn't insult someone in a conversation on the street just because they were a fan of another brand of shoes. Don't do it here either.
Something tells me Fermi will compete with ATI's 6000 series and not the current cards. It makes no sense to release a card that competes with something that has been on the market for six(?) months.
This is just my opinion.
What "camp" am I in? I own both Nvidia and ATI; if I'm in a camp, it's the 30" monitor camp. I want the best-performing card at my resolution, and right now ATI has the best solution. Fermi is late, and given the lengths Nvidia has gone to in promoting it, it's going to be a fail. I have hopes that the Fermi refresh will be better, but you never know if the next NV30 is just around the corner. I am disappointed in Nvidia for talking the talk but not walking the walk.
The truth of the matter is ATI is currently on the way up and Nvidia is just standing still.
Exactly. Nvidia looks to be playing catch-up with ATI even after Fermi is released.
Umm, how? Fermi is a 512-shader card that is very flexible and has a huge edge over Evergreen in FP64 FMA. But basically Evergreen can do what Fermi can (FMA, etc.), just with less flexibility. Nonetheless...
If AMD's approach of doubling the number of shaders continues, the HD 6xxx may have close to 3200 shaders and performance slightly below a 5970!!
That of course means 640 fat (5-wide) shader units, compared with the 512 scalar processors Fermi has. In the end, Fermi/the GTX 380 will compete with the 5970 and 5870...
Fermi's FP64 will be better for apps, but I'm not sure about games; the console ports don't need much. ;)
GPU apps will need to take off, because as it stands there's too much untapped GPU potential out there for the lazy devs. More encoding programs need to start using the GPU, and not just one GPU but however many a person has in their system.
That could all be true.
But I am inclined to think that the 6xxx series will be a new arch.
The 5x series was just the same as the 4x, just doubled.
This happened with the GT200 over the old G92's.
Just a doubling of the specs, which was evident in that the 9800GX2 was about the same as the GTX 280 in the beginning, until better drivers appeared.
I really hope Fermi is the amazing card NV makes it out to be; that way it might be competitive with the 6xxx series from ATI.
As long as it performs well and the price isn't sky-high, it'll be a good card.
The main things for me are its length and power requirements.
I'm tired of having massive towers just so I can accommodate a massive card.
I'd like to somehow see a highend card that's only 9" in length.
But I might just be dreaming there.
Maybe you have noticed that, like many others here on XS, I'm not an average Joe; I have a bit more money to put into my hardware, about €5-15k each year. :)
I believe this is what nVidia is aiming for, but until then, over the next six months nVidia is going to lose a big chunk of money, because the HD5970 and HD5870 from ATI/AMD are beating every card they have at a lower price.
Sooner or later nVidia is going to drop GTX285 and GTX295 prices heavily in order to compete with the ATI/AMD HD5970 and HD5870 in price/performance.
C++ itself is mostly just a superset of C. Slower. Bloatier. Not really suitable for hardware, due to its nature. Basically the "C++" on Fermi will be "C with some C++ influences".
The only real advantage will be C++ libraries. People will still write very C-like code, because it's more efficient.
I'm waiting for GF500; it should have Python and Ruby interpreters built in.
nvidia would be best off if they simply shut up. all their blabbering about fermi isn't getting them anywhere, it's just more and more bad pr. they should start talking once they get it in the hands of a legitimate reviewer, and then build hype. for now, the more they talk the more people buy ati.
Completely off topic, but +1
This right here is the essence of the problem with the internet: there is no accountability. From false advertising to viral marketing to the lack of ethical standards and repercussions, the way the internet works is going to have to change radically or there is trouble ahead. IMO!
BTW, that's nothing against you Safan, just a misplaced observation that seemed to fit well with his post! :D
Now, back to your regularly scheduled program... :p:
nVidia isn't going to drop the price on the 285/295; nVidia is going to drop the cards themselves. The cards cost this much to manufacture; if they lower the price, they will lose money, and that's a stupid thing to do when the replacements are around the corner.
Yep, that's what I think, but nonetheless I have written a couple of programs exceeding 200 LOC in C, and I hope they work on Fermi.
The other thing I'm not sure of is the speed: how much faster is it than, say, a 3GHz C2D at this kind of work? I hope they do bring all the C++ libs in, though; programming Fermi in C using C++ resources would be a lot of fun.
How have you not read the Fermi white paper?
I agree about C++, but not so much about DX11 being useless. I mean, yeah, C is popular, but DX11 supports the DirectCompute API. I know it's slower than C, but it's still easier to write for than lengthy C. The other bright star is OpenCL; even that is slower than C, but again, easier to write for.