If that's true, the GTX 580 is in big trouble! And I will pick one up day 1
I added a link to the source. Check that post again.
If that's true.... well, the GTX 580 will only have a couple of weeks as the tessellation king (and single-GPU king)... fail king
lol
Really hoping it will at least perform equal to the GTX 580 (a loss of 5, maybe 10% ain't that bad; beating it would be great). Not because I wouldn't want an nVidia card, but because I hate dual-card setups (using Eyefinity at the moment): two GTX 580s cost a thousand euros and will make a PC consume about 650 watts during gaming on the graphics side alone (2x 250 watts plus 3x 50 watts for the monitors).
I'd rather have 5 to 10% less performance (and a cheaper card) if I can avoid the SLI required for nVidia's solution and only use about 400 watts on the graphics side.
And that is why I am hoping it performs at most 10% worse than a GTX 580, and hopefully better, so I can avoid having to CrossFire (to get enough FPS) or SLI (since one GTX 580 can't run three monitors)?...
*insert random icon to make post look more epic* :shakes:
Dude, you're missing the idea here. The HD 5870 was a die shrink, and spec-wise it almost doubled the HD 4870: 1600 shaders compared to 800 shaders (most people are going to think that means double the performance), plus double the ROPs and a higher clock speed. So on paper it was twice as fast, but when it came down to it the HD 5870 was rarely close to being twice as fast because of the poor shader organization and layout. It also used more power, was larger, and gave off about the same amount of heat as the 4870, and it took more than a year to come out. However, the HD 5870 was (and still is) a great card that let ATI take back the fastest-single-GPU crown for a good while. I can't really remember much complaining about it... even from nVidia fanboys, considering how good the performance was and how it brought DX11 and Eyefinity to the table.
Why are people impressed by the GTX 580? Because it's out just 7 months after the GTX 480, it uses less power, puts out less heat, is smaller, is about the same price (in Canada), and is still 20% faster without a die shrink... and it overclocks pretty well too.
On to Cayman: these leaked specs seem to confirm the rumors of 1920 shaders, a 4D layout, 2GB of RAM, blah blah blah. Come on AMD, leak something good like a bench or something :D This card will be powerful, but it just does not add up to have it beating the 5970. Yes, the shaders are better utilized and there is less wasted power, but a 1280-SP deficit is not easy to overcome. I think if anything the 6970 will end up neck and neck with the GTX 580, leading to fanboy violence from both camps, a brutal price war, and smiles on all the kids' faces at Christmas.
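The "1280-SP deficit" is just the gap between the rumored Cayman count and the 5970's combined total; a quick check (counts are the thread's rumored figures, not confirmed specs):

```python
# Shader-count arithmetic behind the "1280 SP deficit" claim.
hd5970_shaders = 2 * 1600  # dual-GPU board: two Cypress chips at 1600 SPs each
cayman_shaders = 1920      # rumored HD 6970 (Cayman) shader count

deficit = hd5970_shaders - cayman_shaders
print(deficit)  # 1280
```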
Massive problem here: I am decades past being a kid, so how am I going to justify an upgrade by Christmas???
No, nVidia fans were all over how GF100 would beat the 5870 into oblivion.
That's not impressive: ~10% of that is clocks, which leaves roughly 5-15%, depending on the situation, for the full CUDA core count and the improvements. I really don't see anything impressive. And 7 months... do you really think that in the time nVidia was busting its ass to fix GF100 they didn't come up with a lot of ideas? Those ideas just could not all be implemented in time. So they did what they could: bubblegum to get GF100 released, and then work on the other improvements that could be done in an acceptable time frame, to try to catch AMD's lead.
Quote:
Why are people impressed by the GTX 580? Because it's out just 7 months after the GTX 480, it uses less power, puts out less heat, is smaller, is about the same price (in Canada), and is still 20% faster without a die shrink... and it overclocks pretty well too.
Well, performance remains to be seen. I have no trouble believing it adds up to 5970 performance; as you yourself said, Evergreen is a very inefficient design. What do you think Cayman is all about? It's finding the right mix to stay efficient at high shader counts. I bet Barts is what it is because that is the Evergreen design's "sweet spot".
Quote:
On to Cayman: these leaked specs seem to confirm the rumors of 1920 shaders, a 4D layout, 2GB of RAM, blah blah blah. Come on AMD, leak something good like a bench or something :D This card will be powerful, but it just does not add up to have it beating the 5970. Yes, the shaders are better utilized and there is less wasted power, but a 1280-SP deficit is not easy to overcome. I think if anything the 6970 will end up neck and neck with the GTX 580, leading to fanboy violence from both camps, a brutal price war, and smiles on all the kids' faces at Christmas.
Quote:
Partners not getting their AMD Radeon HD 6970 boards in time has led many to speculate that AMD is indeed facing yield issues with its upcoming flagship GPU. However, an insider has shared a contradicting piece of news: the card's delay is not due to yield issues - production is perfectly fine - but rather a shortage of a particular component from Texas Instruments (TI) is the root cause of the hold-up. This TI component is an integrated driver MOSFET (DrMOS) that was first used on AMD's Radeon HD 6800 series. This DrMOS is so new that there is no information on it on the Web, not even from the manufacturer itself.
Supply of this DrMOS is limited, and since the Radeon HD 6800 Series and upcoming HD 6900 Series share the same VRM design, any (tight) supply from TI is shared between all the cards. This leads to a delay in HD 6970 card manufacturing, with partners receiving their final boards late as well.
It is interesting to note that AMD has also withheld the final BIOS from partners. It is an open question whether NVIDIA's just-launched GeForce GTX 580 flagship GPU and its performance figures have anything to do with this. That aside, the initial November 22nd date may be set to change; AIBs will only know the final launch date from AMD at the end of this week.
http://vr-zone.com/articles/amd-rade...med/10276.html
The chiphell admin basically created a very gutsy thread titled
"No more 6970 performance predictions allowed!", and left this nugget of insight:
"Yeah yeah it's faster than the 580 alright, no speculation on how much faster!"
If I were one of those pathetic fanboys pretending to care about consumers I'd be oooooh disappointed now.
LMAO, what a way to pave the road ahead; hopefully the silly fanboyisms will die down from here on.
Yeah, it makes far more sense for the 6970 to have 1920 shaders instead of the 1536 that previous rumours stated. Why would the 6970 have fewer shaders than the 5870? That didn't add up.
"Yeah yeah it's faster than the 580 alright, no speculation on how much faster!"
that's all I care about right about now