I'm not really sure why anyone would think IB would be any different (in terms of IPC) from SNB anyway; Intel themselves said it would be a simple die shrink with a few modifications here and there.
What are the chances really that the first leaked specs for an AMD GPU are legit?
Sure there will be minor boosts, but nothing worth waiting an entire year for. A lot of people were mixing up Ivy Bridge and Sandy Bridge-E too; they thought that Socket 2011 would have both within weeks of each other. If you wanted a rig in January 2011 but decided to wait for Ivy Bridge hoping for a 22nm hex or octo core in November, you're going to have to find another way to stay warm this Xmas.
You just have to look at the Cayman rumors to see that people more than give AMD the benefit of the doubt, even when common sense says otherwise, such as Cayman not moving to a smaller fabrication process. The hype train was ridiculous, and people were expecting 5970-beating performance from Cayman.
You will see so many gold quotes in that thread.
http://www.xtremesystems.org/forums/...%28or-rumor%29
It also matters how you measure: with higher AA sampling, the 6970 can surpass the 580.
Then the 6970 is the better card.
Overclock the 6950, or unlock it to a 6970, and the same thing happens.
Then again, do gamers even notice the difference in fps?
Set up games on two machines, one Nvidia and one AMD, let people play them, and find out whether they can spot the difference between the machines.
Unless users can actually notice a difference, there is no practical difference between the cards.
The same applies to AMD/Intel and their CPUs.
Doing it that way is the only way the benefit is practical, because it's measurable.
However, game and tech sites would be out of a job if someone started doing such tests.
Higher sampling? What do you mean? Pretty much every review site out there has the GTX 580 faster. The only game where the 6970 appears to be faster is that F1 racing game, really. People were expecting the card to be faster than the 5970 and it wasn't even close to that speed. By and large, every single review on the internet has the GTX 580 faster than the 6970. It doesn't hurt that the GTX 580 is also much more overclockable than the 6970. A lot of reviews show that even with the 6970 overclocked, the GTX 580 is still faster at stock clocks.
You're kind of talking out of your ass and over-generalizing with everything else. You think that if someone started implementing your testing method, all the game and tech sites would fold? That's ridiculous. HardOCP sort of does this kind of testing already, comparing the maximum visual settings a card can use against the competition, and I doubt they are personally responsible for putting tech websites out of business.
While I agree that there are instances where people won't notice the difference because old games will run at 150 fps, most reviews don't test many games where that is the case, or at least they include in their benchmarks plenty of games where there can be huge usability and visual differences.
There can be huge playability differences between these cards, and Skyrim is a good example.
http://www.hardocp.com/article/2011/...ance_iq_review
Especially when buying high-end, you run into enthusiasts who care more about settings, framerates, and visual fidelity. Although some just have too much money, there are a lot of people who pay the extra because they feel their money is justifiably being spent on better performance.
Agreed. I have no data, but it has always been a few % improvement from tweaks/cache and then MegaHurts improvements from the process shrink. I think everyone is hoping IB will be SB on steroids. I'd say SB is sufficient though... 5 GHz on air practically guaranteed is not bad.
AMD has built up that reputation with their GPUs though... it has always been a massive jump when changing processes. NVIDIA is the same way: G80/G92 -> GT200/b -> GF100. I believe everyone, especially after the "disappointment that was the HD 6K series", is expecting a massive jump with the move from 40nm to 28nm. I know I sure am :D
I'm expecting good things too, but not crazy ballistic things. To me, for some reason, the very most I imagine is 6990 performance, which would be super fantastic. Realistically, though, I am expecting 20-35 percent below that. Moving to a new process helps, but from what I have been told, you can't just double the part, because power consumption would still be through the roof: a shrink doesn't cut power in half even though it lets you put nearly twice as much stuff on the die. I know GCN is a new core, but from what I hear, GCN has a lot to do with increasing AMD's ability to compete in the professional market, and to me that means more GPU compute performance at the expense of pure gaming performance. That possibly means more power consumption too.
I think the target for the 7970 was 1.4x of the 6970, but I was told that a while back so I can't really verify it. If that is the target and they do meet it, then taking the 6990 as ~1.85x of the 6970 leaves us with 1.4/1.85 = ~0.76x of a 6990. Of course these are very rough estimates and calculations, but that's the performance I expect.
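A minimal sketch of that back-of-the-envelope math, assuming the rumoured multipliers above (the 1.4x target and ~1.85x dual-GPU scaling are hearsay, not confirmed specs):

```python
# Rough relative-performance estimate using the rumoured multipliers above.
# Baseline: HD 6970 = 1.0 (these are assumptions, not benchmark results).

hd6970 = 1.0
hd7970_target = 1.4 * hd6970   # rumoured design target for the 7970
hd6990 = 1.85 * hd6970         # rough scaling of the dual-GPU 6990

# How close would a 7970 hitting its target get to a 6990?
ratio = hd7970_target / hd6990
print(f"7970 vs 6990: {ratio:.2f}x (~{(1 - ratio) * 100:.0f}% short of a 6990)")
```

Which lands at about 0.76x of a 6990, i.e. roughly in the 20-35% gap mentioned above.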
Although I did join the hype wagon, my comments were based on the assumption that Cayman would have 1920 shaders rather than the number it actually shipped with, which turned out not to be true.
Anyhow, I'm not making the same mistake again. But I still expect the jump to 28nm to be a bigger one than the previous launch, which stayed on the same node.
Still no word on when the 78XX cards are coming? Other than "December".
Something to remember is that Cayman is probably the biggest die we've seen out of AMD/ATi since R600. I doubt they're going to shoot for a huge die again; they're obviously going to aim to bring this chip back down to around the size of the 5870, rather than what we see now.
If I had to guess (and we all know I'm not a fan of guessing with GPU architectures, I prefer to see it with my own eyes), I'd estimate we'll see around the performance we originally expected out of Cayman from this chip, but NOT 60%+ higher than Cayman. Unless the shaders really are that much more efficient (and most signs point towards them aiming at better compute performance), we'll probably see around 30% out of this one, tops. Depending on the tessellation performance of this chip, there may be places where the jump is much larger, since AMD's current bottleneck IS their tessellation performance.
That said, if AMD believes they have the jump on their competitor in terms of being ready for production, they'll release a weaker unit now and sit on the better chip until NVIDIA shows their face, as they'll be able to maximize profits off the smaller die and then move on to something with more kick when the time comes.
One thing people keep forgetting, though, is that AMD doesn't aim for the top end with their single chips. Don't expect a high-end single GPU that beats whatever NVIDIA throws out the gate unless NVIDIA screws up, and I think we all know they aren't going to repeat their mistakes.
That all said, I really hope AMD has a winner on their hands, as that'll force down prices no matter which brand you prefer. Win-win for everyone!
The 1920-shader Cayman, even if it had come true, would not have made as big a difference as many people think. Xbitlabs did a comparison between the 6950 and 6970 with both clocked the same, and the difference was literally 2% (the difference between the 5870 and 5850 was similar when both ran the same clocks). The reason Barts was so fast for its size and shader count is that the architecture had basically hit the point of diminishing returns on shaders. Adding more shaders past that point brings far smaller gains, which is why the Barts-to-Cayman difference was not as big as it should have been: Barts with its 224 VLIW5 shader units vs Cayman's 384 VLIW4 units. At this point, I think the current AMD architecture is unbalanced and has too much shader power. There are some games where the architecture shines, but even then it usually just matches or slightly beats Fermi. That is why AMD has to switch architectures: VLIW4 and VLIW5 have run out of legs. But man, what a ride, and what freakish longevity.
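A quick illustration of that diminishing-returns point. The shader counts are the well-known specs (6970: 1536 SPs, 6950: 1408 SPs, 6870/Barts: 1120 SPs); the ~2% figure is the same-clock result quoted above, not my own measurement:

```python
# Diminishing returns on shader count, using the same-clock comparison above.

cayman_6970_sp = 1536   # HD 6970: 384 VLIW4 units x 4 SPs
cayman_6950_sp = 1408   # HD 6950: 352 VLIW4 units x 4 SPs
barts_6870_sp  = 1120   # HD 6870: 224 VLIW5 units x 5 SPs

extra_shaders = cayman_6970_sp / cayman_6950_sp - 1
print(f"6970 has {extra_shaders:.0%} more shaders than a 6950")   # ~9%
print("...yet at equal clocks the measured gap was only ~2%")

extra_vs_barts = cayman_6970_sp / barts_6870_sp - 1
print(f"Cayman has {extra_vs_barts:.0%} more SPs than Barts")     # ~37%
```

So a ~9% shader bump buys only ~2% at the same clocks, which is why another 128 shaders on a hypothetical 1920-SP Cayman wouldn't have changed much.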
Good point SkYMTL, I agree. I wasn't trying to start anything... ;)
Will the whole line be introduced in London on Dec 5th?
Can't wait for some solid data!
Today is the 5th of December. Maybe we will learn some details.
I'm just pleased that ATI is finally releasing a card they know needs more bandwidth than a 256-bit GDDR5 bus can provide; after the debacle of the HD 2xxx series, you know they wouldn't dare unless they knew it needed it. Either way, I hope this thing will at least give my single GTX 580 a beating.
Isn't it weird that the GPU field says "AMD Radeon"? Shouldn't that be the model and/or marketing name? I'll call this a fake.