-
Quote:
Originally Posted by
Sh1tyMcGee
I had a 5950Ultra :shakes:
ATI's version is the 2900XT. I think the 2900 is the worst video card of all time; the fact that ATI was so desperate and released it knowing it was garbage is even worse.
At least the 2900 XT was priced competitively (8800 GTS 640MB). Nvidia's NV30 generation were all priced as if they were performance leaders.
But yes, R600 was a huge flop when you consider what it could've done and the hype prior to launch (320 SPs, 512-bit external ring bus, 1GB of GDDR4, etc.).
Quote:
If there's one thing I've learned about the ATI-only crowd, it's that hope springs eternal. Always hoping to have a product that smashes Nvidia the way the 9700 Pro did, and then dancing like little schoolgirls just because it competes.
Yes because R300, R400, R500 and RV7x0 were all uncompetitive--oh wait...
-
So, when is the 5870 going to be demonstrated in public?
Quote:
Originally Posted by
Wesker
At least the 2900 XT was priced competitively (8800 GTS 640MB).
If the only purpose of having the 2900XT was running 3DMark, then it was priced competitively. Seriously, the Radeon 2000 series was garbage.
Quote:
But yes, R600 was a huge flop when you consider what it could've done and the hype prior to launch (320 SPs, 512-bit external ring bus, 1GB of GDDR4, etc.).
It was a flop, not only because of the hype, but also because it arrived so late.
-
Quote:
Originally Posted by
gOJDO
So, when is the 5870 going to be demonstrated in public?
If the only purpose of having the 2900XT was running 3DMark, then it was priced competitively. Seriously, the Radeon 2000 series was garbage.
It was a flop, not only because of the hype, but also because it arrived so late.
IIRC, it was very competitive with the 8800 GTS (>=) without MSAA. Once you enabled MSAA, however, it got a little murky (<=). This wasn't helped by the terrible drivers at launch time (again, IIRC, there were times when R580+ managed to outperform R600). I believe several key ATI employees came out saying that they wished the drivers had been in much better shape for launch (one of whom was Sir Eric Demers, a lead chip architect at ATI).
A massively oversized die, a very leaky 80nm process, no hardware-based MSAA resolve, premature drivers and a shortage of texture throughput were R600's undoing, IMO.
But the $399 price tag, the included Black Box bundle and native sound over the HDMI interface kept it from becoming a complete disaster.
EDIT: Changed Orange Box to Black Box.
-
Come on already....where are the leaks???
I think the can of whupass will come in at 2000 SPs. Haha, ok, well, maybe just hoping. ;)
-
London, USA, Asia AMD Events
-
Just to remind everyone that the picture is fake.
-
Wish someone in China would leak some info about the product already. Anyone check Chiphell lately for any new info?
-
Quote:
Originally Posted by
MTP04
It says P Vantage. :p:
Where are you guys getting 3D06 from?
Because he brought it up.
I was trying to make the point that it obviously was Vantage, given the scores.
Quote:
Originally Posted by
Wesker
A massively oversized die, a very leaky 80nm process, no hardware-based MSAA resolve, premature drivers and a shortage of texture throughput were R600's undoing, IMO.
But the $399 price tag, included Orange Box bundle and native sound over HDMI interface kept it from becoming a complete disaster.
Umm... wasn't it "Black Box?"
R600's TMUs were amazing; there just weren't enough of them, nor were they clocked high enough, like you said.
-
Quote:
Originally Posted by
570091D
there are many large differences between ati and nvidia, first and foremost being their style of building gpus!
there are advantages and disadvantages to owning a card from each brand; it all depends on what you use your rig for and what games you play. these arguments stating "zomg ati pwns nv again!!11!!1!!11!" or vice versa are silly. it's up to each one of us to look at the data and decide where to spend our hard-earned greenbacks. we don't know the specs for either camp, we don't know the performance, we don't know the cost. i'm down for some tin-foil-hat speculation, but these statements about there being ONE AND ONLY ONE good gfx company are useless.
My sentiments exactly. I just do not see what brings on the hate for one brand or another. I will always buy what is best for my needs and interests; I only like single-GPU more for simplicity's sake (simple for us, even if not for Nvidia to make).
Quote:
Originally Posted by
eric66
lol, Nvidia bot. How about the X1900 series: top-notch performance with superior IQ and 'HDR and AA at the same time'. Did that remind you of something? Go and BS elsewhere. Except for the R600 series, there wasn't that much difference between ATI and Nvidia since the 9000 series.
Thanks for proving my argument, at least indirectly. Also, from 2002-2005 I only bought ATI cards for my main rig. And guess what card I owned for most of 2006: an X1900XT!
The problem was that the card was a real latecomer, and the X1800XT was not what was needed to dethrone, or even shake, the ages-old grip the 7800GTX had at that point. A lot of people considered a pair of 7900s better; glad, for Oblivion's sake, that I didn't. I just disdain how ATI has NOT unarguably wrangled back the crown since the R580 days, a card that was itself a pale victory next to the 9700 Pro. The only thing that really gets me excited about the HD5870 is value, not awe at its performance.
-
Fear the AMD ninjas, fear them.
-
Quote:
Originally Posted by
rekleif
Supposedly one of the midrange cards, aka Juniper.
-
You know, everybody has been thinking 256-bit memory interface due to the appearance of power-of-2 numbers of memory chips, but we have only seen one side of this PCB (and the opposite side of a different PCB, or at least the heatsink imprint). But what if the other side had, say, 8 memory chips, for a total of 12? Probably pretty far-fetched, but it's possible, and the die size estimates line up to a point where odd memory configurations could make sense...
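For anyone following along, the chip-count → bus-width arithmetic is simple: GDDR chips of this era typically expose a 32-bit interface each, so the aggregate bus is just chips × 32. A quick sketch (the 32-bit-per-chip figure is the usual assumption, not something confirmed for this card):

```python
# Typical GDDR3/GDDR5 chips each have a 32-bit interface,
# so aggregate bus width = number of chips * 32.
CHIP_WIDTH_BITS = 32

def bus_width(num_chips):
    """Aggregate memory bus width in bits for a given chip count."""
    return num_chips * CHIP_WIDTH_BITS

print(bus_width(8))   # 8 chips  -> the 256-bit bus everyone assumes
print(bus_width(12))  # 12 chips -> an odd 384-bit bus
```

Which is why a non-power-of-2 chip count like 12 would point at a 384-bit bus rather than 256-bit.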
-
Quote:
Originally Posted by
Sly Fox
Let's just all agree that the 5800 Ultra was the most obnoxious card ever and move on with our lives. :rofl: :up:
I bought a 5900 Ultra on launch day in Japan for $650.
Probably not a great idea!
I bought an 8800 GTX on launch day on Newegg for $650.
Best idea I ever had... kept it for over a year and a half....
-
I think it's time the graphics market moves to 512-bit for high-end parts; 256-bit is so 2005.
-
Some cards in the Nvidia 5 series were good too, the 5700 LE being one of them. I bought one quite cheap and OCed the hell out of it. Upgraded to an X850 and then a 6800 GT (the X850 did not run new games due to the Shader Model issue).
-
Quote:
Originally Posted by
hurleybird
You know, everybody has been thinking 256-bit memory interface due to the appearance of power-of-2 numbers of memory chips, but we have only seen one side of this PCB (and the opposite side of a different PCB, or at least the heatsink imprint). But what if the other side had, say, 8 memory chips, for a total of 12? Probably pretty far-fetched, but it's possible, and the die size estimates line up to a point where odd memory configurations could make sense...
Except 4 ICs were supposedly confirmed, quite a while ago, under the HS.
-
Quote:
Originally Posted by
jaredpace
21K on Vantage (P) sounds about the same as a GeForce GTX 295. Very impressive.
-
Quote:
Originally Posted by
hurleybird
You know, everybody has been thinking 256-bit memory interface due to the appearance of power-of-2 numbers of memory chips, but we have only seen one side of this PCB (and the opposite side of a different PCB, or at least the heatsink imprint). But what if the other side had, say, 8 memory chips, for a total of 12? Probably pretty far-fetched, but it's possible, and the die size estimates line up to a point where odd memory configurations could make sense...
256-bit?
Nah. I'm pretty sure Juniper is 128-bit.
-
I'm looking forward to 200K 3D03 :) also 150K single card.
Neither of these numbers will be especially challenging.
-
Goddammit! Why can't they just bring it on already. Can't take it anymore :D
@hurleybird: There were rumours about a 384-bit interface, but afaik they were proven wrong?! :shrug:
Quote:
Originally Posted by
JohnZS
Speaking as an nVidia fanboy [...]
Speaking about yourself as an nVidia fanboy, I would be really glad if every fanboy would put so much thought into his/her posts :up:
PhysX is really a pity for AMD/ATI; hopefully this will change soon when Havok works on OpenCL :) I personally prefer OpenCL over CUDA, as it is not bound to one company only.
-
Quote:
Originally Posted by
FischOderAal
Speaking about yourself as an nVidia fanboy, I would be really glad if every fanboy would put so much thought into his/her posts :up:
PhysX is really a pity for AMD/ATI; hopefully this will change soon when Havok works on OpenCL :) I personally prefer OpenCL over CUDA, as it is not bound to one company only.
I really do hope AMD releases Havok support sometime with the HD5000 series... It will totally slaughter Nvidia's closed, proprietary PhysX.
-
Quote:
Originally Posted by
Smartidiot89
I really do hope AMD releases Havok support sometime with the HD5000 series... It will totally slaughter Nvidia's closed, proprietary PhysX.
AMD can't release Havok, because Havok is an independent company, bought by Intel a while ago.
On the other hand, PhysX is just as closed a proprietary library as Havok is, probably even more open. Both are closed source, but both are free to license and free to download (SDK) — except that for Havok only the basic package is royalty-free (rigid-body physics and character control / inverse kinematics), not other modules such as soft bodies or destructible objects.
If you're talking about PhysX providing an extra feature, namely GPGPU functionality through CUDA, that doesn't make it more closed than the others. Even less so when those others don't provide an equivalent feature (yet). When they do, PhysX will simply be less compatible, not less open.
Don't get me wrong, I'm all for OpenCL and open standards, and against the use of hardware-vendor-dependent APIs (CUDA, EAX...). But let's call things by their name.
-
Just read 2160 GFLOPS here: http://translate.google.com/translat...istory_state0=
2160 GFLOPS would be a 675 MHz core. I hope this is the 5850.
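That clock figure follows from the rumored 1600 SPs: each SP does 2 FLOPs per clock (multiply-add), so peak throughput in GFLOPS is SPs × 2 × clock in GHz. A quick sketch of the arithmetic (the 1600-SP count is the rumor, not a confirmed spec):

```python
def gflops(shader_processors, core_clock_ghz, flops_per_clock=2):
    """Peak single-precision throughput: SPs * FLOPs-per-clock * clock (GHz)."""
    return shader_processors * flops_per_clock * core_clock_ghz

# Rumored 1600 SPs at 675 MHz -> the quoted 2160 GFLOPS figure
print(gflops(1600, 0.675))  # 2160.0
```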