Page 20 of 34 FirstFirst ... 101718192021222330 ... LastLast
Results 476 to 500 of 828

Thread: AMD Radeon HD6950/6970(Cayman) Reviews

  1. #476
    Xtreme Enthusiast
    Join Date
    Jun 2006
    Location
    Space
    Posts
    769
    Quote Originally Posted by RSC View Post
    If dual-GPU solutions performed on the same level as single-GPU solutions and didn't suffer from micro-stuttering and an extreme need for optimized drivers, nobody would buy a high-end single-GPU card. If the performance stability and smoothness were the same, everybody would just buy two 5770s or two GTX 460 SEs and call it a day.
    Speak for yourself.

  2. #477
    Xtreme Addict
    Join Date
    May 2009
    Location
    Switzerland
    Posts
    1,972
    Quote Originally Posted by RSC View Post
    If dual-GPU solutions performed on the same level as single-GPU solutions and didn't suffer from micro-stuttering and an extreme need for optimized drivers, nobody would buy a high-end single-GPU card. If the performance stability and smoothness were the same, everybody would just buy two 5770s or two GTX 460 SEs and call it a day.
    You have it backwards... most people will not buy a high-end card simply because of the price, and for the same reason they don't want dual-GPU cards or SLI/CFX: if they won't put $500 into one card, they won't buy two at $300 each.

    Especially in recent years, mid-range cards have become more and more capable for gaming, even at high resolutions. The masses (and I'm speaking of gamers) will buy a 5770, a 6850/6870, or a GTX 460 and be happy with that.

    Most people I know who use CFX or SLI simply want more performance than a single high-end GPU can offer...
    Last edited by Lanek; 12-17-2010 at 12:50 PM.
    CPU: - I7 4930K (EK Supremacy )
    GPU: - 2x AMD HD7970 flashed GHZ bios ( EK Acetal Nickel Waterblock H2o)
    Motherboard: Asus x79 Deluxe
    RAM: G-skill Ares C9 2133mhz 16GB
    Main Storage: Samsung 840EVO 500GB / 2x Crucial RealSSD C300 Raid0

  3. #478
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Portugal
    Posts
    233
    Quote Originally Posted by Lanek View Post
    You have it backwards... most people will not buy a high-end card simply because of the price, and for the same reason they don't want dual-GPU cards or SLI/CFX: if they won't put $500 into one card, they won't buy two at $300 each.

    Especially in recent years, mid-range cards have become more and more capable for gaming, even at high resolutions. The masses (and I'm speaking of gamers) will buy a 5770, a 6850/6870, or a GTX 460 and be happy with that.

    Most people I know who use CFX or SLI simply want more performance than a single high-end GPU can offer...
    What does that have to do with what I said? I wasn't talking about why mainstream gamers buy this or that card. I was talking about the differences between a high-end single-GPU card and a dual-GPU card, and about why comparing the two isn't "fair": they don't offer the same kind of performance, because the 5970 suffers from the same issues as all dual-GPU solutions.
    Last edited by RSC; 12-17-2010 at 03:05 PM. Reason: Typo.

  4. #479
    Xtreme Addict
    Join Date
    Mar 2005
    Location
    Rotterdam
    Posts
    1,553
    Quote Originally Posted by RSC View Post
    What does that have to do with what I said? I wasn't talking about why mainstream gamers buy this or that card. I was talking about the differences between a high-end single-GPU card and a dual-GPU card, and about why comparing the two isn't "fair": they don't offer the same kind of performance, because the 5970 suffers from the same issues as all dual-GPU solutions.
    What you are doing is invalidating AMD's strategy of pairing two efficient chips to match one giant chip from NVIDIA. Basically you mean it is only fair to compare one chip with one chip, no matter how they are designed. Sure, the 5970 has some microstuttering, but most people don't care or don't notice it when purchasing the product. The fairest comparison is between products in the same price range that fit into one PCIe slot, and thus 580 vs. 5970 is a fair comparison between similar products. Just because one is big and powerful and the other is dual and efficient doesn't change much of the user experience today; they are merely design choices.
    Last edited by Dimitriman; 12-17-2010 at 01:39 PM.
    Gigabyte Z77X-UD5H
    G-Skill Ripjaws X 16Gb - 2133Mhz
    Thermalright Ultra-120 eXtreme
    i7 2600k @ 4.4Ghz
    Sapphire 7970 OC 1.2Ghz
    Mushkin Chronos Deluxe 128Gb

  5. #480
    Xtreme Member
    Join Date
    Jan 2007
    Posts
    235
    SKYMTL thanks for clearing that up.
    ---
    ---
    "Generally speaking, CMOS power consumption is the result of charging and discharging gate capacitors. The charge required to fully charge the gate grows with the voltage; charge times frequency is current. Voltage times current is power. So, as you raise the voltage, the current consumption grows linearly, and the power consumption quadratically, at a fixed frequency. Once you reach the frequency limit of the chip without raising the voltage, further frequency increases are normally proportional to voltage. In other words, once you have to start raising the voltage, power consumption tends to rise with the cube of frequency."
    +++
    1st
    CPU - 2600K(4.4ghz)/Mobo - AsusEvo/RAM - 8GB1866mhz/Cooler - VX/Gfx - Radeon 6950/PSU - EnermaxModu87+700W
    +++
    2nd
    TRUltra-120Xtreme /// EnermaxModu82+(625w) /// abitIP35pro/// YorkfieldQ9650-->3906mhz(1.28V) /// 640AAKS & samsung F1 1T &samsung F1640gb&F1 RAID 1T /// 4gigs of RAM-->520mhz /// radeon 4850(700mhz)-->TRHR-03 GT
    ++++
    3rd
    Windsor4200(11x246-->2706mhz-->1.52v) : Zalman9500 : M2N32-SLI Deluxe : 2GB ddr2 SuperTalent-->451mhz : seagate 7200.10 320GB :7900GT(530/700) : Tagan530w

  6. #481
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Portugal
    Posts
    233
    Quote Originally Posted by Dimitriman View Post
    What you are doing is invalidating AMD's strategy of pairing two efficient chips to match one giant chip from NVIDIA. Basically you mean it is only fair to compare one chip with one chip, no matter how they are designed. Sure, the 5970 has some microstuttering, but most people don't care or don't notice it when purchasing the product. The fairest comparison is between products in the same price range that fit into one PCIe slot, and thus 580 vs. 5970 is a fair comparison between similar products. Just because one is big and powerful and the other is dual and efficient doesn't change much of the user experience today; they are merely design choices.
    Dual-GPU setups have micro-stuttering; single GPUs don't. And that, my friend, is a big difference in gameplay experience, no matter how you paint it.
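    For what it's worth, micro-stuttering is usually quantified from frame-time logs rather than average FPS: with alternate-frame rendering, frame intervals tend to alternate short/long even when the average framerate looks fine. A minimal sketch of that kind of analysis; the frame times below are made up for illustration, not measured from any card:

```python
def frame_time_stats(frame_times_ms):
    """Mean frame time plus the mean ratio between consecutive frames.

    A ratio near 1.0 means even pacing; noticeably higher means the
    frame intervals alternate, i.e. perceived micro-stutter."""
    mean = sum(frame_times_ms) / len(frame_times_ms)
    ratios = [max(a, b) / min(a, b)
              for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return mean, sum(ratios) / len(ratios)

# Hypothetical logs: both average 20 ms (50 FPS), very different pacing.
single_gpu = [20, 21, 19, 20, 20, 20]
dual_gpu   = [10, 30, 10, 30, 10, 30]   # classic AFR alternation

print(frame_time_stats(single_gpu))  # pacing ratio close to 1.0
print(frame_time_stats(dual_gpu))    # pacing ratio 3.0, visible stutter
```

    Both logs would report the same average FPS, which is why average-framerate benchmarks alone can hide the effect being argued about here.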

  7. #482
    Xtreme Enthusiast
    Join Date
    Jan 2008
    Posts
    743
    Seeing microstuttering clearly enough for it to annoy you is like being allergic to seafood: sucks to be you. Dual-GPU scaling is amazing this round for both AMD and NVIDIA: 100% scaling in Metro 2033 for AMD, and 97% for NVIDIA.
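    For reference, scaling numbers like these are typically computed from average framerates with one GPU versus two. A minimal sketch; the FPS values below are hypothetical, not actual Metro 2033 results:

```python
def scaling_percent(fps_single, fps_dual):
    """Performance added by the second GPU, as a percentage of one GPU.

    100 means the second GPU contributed a full single-GPU's worth of
    extra frames; 0 means it added nothing."""
    return (fps_dual / fps_single - 1.0) * 100.0

# Hypothetical framerates for illustration:
perfect = scaling_percent(40.0, 80.0)   # second GPU doubles the framerate
good = scaling_percent(40.0, 78.8)      # slightly below perfect, about 97
```

    Note that 100% scaling in one title says nothing about pacing or about the many titles without a CrossFire/SLI profile, which is the point of contention in this thread.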

  8. #483
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    Quote Originally Posted by RSC View Post
    Dual-GPU setups have micro-stuttering; single GPUs don't. And that, my friend, is a big difference in gameplay experience, no matter how you paint it.
    I think you're confusing gameplay experience with propaganda. That's the only thing NV has to compete with the 5970.

  9. #484
    Xtreme Member
    Join Date
    Sep 2009
    Location
    Portugal
    Posts
    233
    Quote Originally Posted by kadozer View Post
    Seeing microstuttering clearly enough for it to annoy you is like being allergic to seafood: sucks to be you. Dual-GPU scaling is amazing this round for both AMD and NVIDIA: 100% scaling in Metro 2033 for AMD, and 97% for NVIDIA.
    It is indeed amazing, but not flawless. All I was saying was that.



    Quote Originally Posted by flippin_waffles View Post
    I think you're confusing gameplay experience with propaganda. That's the only thing NV has to compete with the 5970.
    I'm not confusing anything mate. I would say the same thing if we were talking about the GTX295, for example.

  10. #485
    Xtreme Mentor
    Join Date
    May 2008
    Posts
    2,554
    Quote Originally Posted by kadozer View Post
    Seeing microstuttering clearly enough for it to annoy you is like being allergic to seafood: sucks to be you. Dual-GPU scaling is amazing this round for both AMD and NVIDIA: 100% scaling in Metro 2033 for AMD, and 97% for NVIDIA.
    I wish all games scaled like that, but as we all know that isn't the case, especially once we start talking about games that aren't exactly the latest big thing. I still remember Morrowind with MGE chugging along at unacceptable framerates at settings my GTX 280 ran very well. Source ports like DarkPlaces and EDuke32 didn't scale at all. These are just a couple of examples.
    Last edited by BababooeyHTJ; 12-17-2010 at 03:24 PM.

  11. #486
    Xtreme Addict
    Join Date
    Mar 2005
    Location
    Rotterdam
    Posts
    1,553
    Quote Originally Posted by RSC View Post
    Dual-GPU setups have micro-stuttering; single GPUs don't. And that, my friend, is a big difference in gameplay experience, no matter how you paint it.
    Like I said, few notice it, and most of those who do notice don't care, just as most don't care that one GF110 chip in the 580 consumes as much power as two Cypress chips in the 5970. It doesn't make them different classes of product.
    Gigabyte Z77X-UD5H
    G-Skill Ripjaws X 16Gb - 2133Mhz
    Thermalright Ultra-120 eXtreme
    i7 2600k @ 4.4Ghz
    Sapphire 7970 OC 1.2Ghz
    Mushkin Chronos Deluxe 128Gb

  12. #487
    Xtreme Enthusiast
    Join Date
    Feb 2009
    Location
    Lima, Peru
    Posts
    600
    Quote Originally Posted by AAbenson View Post
    Many of you guys forget that the graphics cards coming out at the end of 2010 were not originally designed by AMD to be on 40nm...
    Then TSMC said it had problems with the smaller process, and AMD somehow managed to partially rework the current release.
    My guess is that they will make good money on both the 68xx and 69xx.
    So in a way, I guess, Barts and Cayman are plan B.
    There's one thing: volume.

    People can buy an HD 6900 and find one easily, while the GTX 500 never had a real hard launch, and in Europe many stores have no stock right now. It's December, Christmas season; to me that looks like a win for the HD 6900.

    NVIDIA, with their current die size, can't afford the same volume as Cayman.

    tajoh111:
    Then what can we say about the GTX 460, with its huge die compared to Barts, selling for the same price? Barts GPUs were overpriced, and these days I'm seeing some price cuts: both the Sapphire and Gigabyte OC versions sell for the same price as the non-OC ones.
    Athlon II X4 620 2.6Ghz @1.1125v | Foxconn A7DA-S (790GX) | 2x2GB OCZ Platinum DDR2 1066
    | Gigabyte HD4770 | Seagate 7200.12 3x1TB | Samsung F4 HD204UI 2x2TB | LG H10N | OCZ StealthXStream 500w| Coolermaster Hyper 212+ | Compaq MV740 17"

    Stock HSF: 18°C idle / 37°C load (15°C ambient)
    Hyper 212+: 16°C idle / 29°C load (15°C ambient)

    Why AMD Radeon rumors/leaks "are not always accurate"
    Reality check

  13. #488
    Xtreme Guru
    Join Date
    Jun 2010
    Location
    In the Land down -under-
    Posts
    4,452
    /close thread lol

    Another thing I find funny is AMD/Intel would snipe any of our Moms on a grocery run if it meant good quarterly results, and you are forever whining about what feser did?

  14. #489
    Xtreme Addict
    Join Date
    Mar 2006
    Location
    Saskatchewan, Canada
    Posts
    2,207
    Quote Originally Posted by Nintendork View Post
    There's one thing: volume.

    People can buy an HD 6900 and find one easily, while the GTX 500 never had a real hard launch, and in Europe many stores have no stock right now. It's December, Christmas season; to me that looks like a win for the HD 6900.

    NVIDIA, with their current die size, can't afford the same volume as Cayman.

    tajoh111:
    Then what can we say about the GTX 460, with its huge die compared to Barts, selling for the same price? Barts GPUs were overpriced, and these days I'm seeing some price cuts: both the Sapphire and Gigabyte OC versions sell for the same price as the non-OC ones.
    Same with them too. NV must hate selling such a big chip at that kind of price. The deals on the GTX 460 are ridiculous nowadays. At least NV had a few solid months of sales at full price, so it isn't a complete bust.

    And if the GTX 560 turns out to be GF104 with everything enabled, things might just turn around for them.
    Core i7 920@ 4.66ghz(H2O)
    6gb OCZ platinum
    4870x2 + 4890 in Trifire
    2*640 WD Blacks
    750GB Seagate.

  15. #490
    Xtreme Enthusiast
    Join Date
    Mar 2009
    Location
    Toronto ON
    Posts
    566
    Quote Originally Posted by SKYMTL View Post
    Contrary to popular belief, TSMC didn't have issues with 32nm. It was dropped for economic reasons after AMD decided to transfer Cozumel and Kauai to 40nm. There were still plans to do Ibiza at 32nm (likely where the 1920 SP rumor came from) but those fell through when TSMC no longer saw a point in pushing a manufacturing process which very few companies would pick up on.


    AMD's lineup would have looked like this on 32nm:

    Ibiza
    Cozumel
    Kauai


    Instead we are getting 4 products:

    Cayman
    Barts
    Turks
    Caicos


    Basically, they are now able to better cover the market with cards using a more mature process while costs are kept to a minimum. Suits me fine.
    Any links about that? I can't Google any, but there are plenty of reports about the TSMC 32nm cancellation. Personally, I believe TSMC cancelled the node because of all the problems with 40nm and GlobalFoundries' announced work on 28nm.

    The bottom line is, if GlobalFoundries had gotten a successful 28nm process up against TSMC's 32nm, TSMC could even have lost NVIDIA's business. They sure could not afford that.

    Here is what AMD's Vice President and General Manager of its GPU Division said, according to bit-tech:
    Mr. Skynner admitted that the HD 6000 series was originally set to use TSMC's 32nm process, but that AMD had to opt back to 40nm earlier this year after that process was unceremoniously dumped by TSMC in favour of concentrating on 28nm only.
    I don't think Skynner lied about it; after all, they still need TSMC.

    EDIT
    If TSMC cancelled 32nm because of AMD, why wouldn't TSMC say so? Or did they? If they did, that sure is big news.
    Last edited by Heinz68; 12-17-2010 at 05:11 PM.
    Core i7-4930K LGA 2011 Six-Core - Cooler Master Seidon 120XL ? Push-Pull Liquid Water
    ASUS Rampage IV Black Edition LGA2011 - G.SKILL Trident X Series 32GB (4 x 8GB) DDR3 1866
    Sapphire R9 290X 4GB TRI-X OC in CrossFire - ATI TV Wonder 650 PCIe
    Intel X25-M 160GB G2 SSD - WD Black 2TB 7200 RPM 64MB Cache SATA 6
    Corsair HX1000W PSU - Pioner Blu-ray Burner 6X BD-R
    Westinghouse LVM-37w3, 37inch 1080p - Windows 7 64-bit Pro
    Sennheiser RS 180 - Cooler Master Cosmos S Case

  16. #491
    Xtreme Addict
    Join Date
    Jul 2007
    Posts
    1,488
    Quote Originally Posted by SKYMTL View Post
    Contrary to popular belief, TSMC didn't have issues with 32nm. It was dropped for economic reasons after AMD decided to transfer Cozumel and Kauai to 40nm. There were still plans to do Ibiza at 32nm (likely where the 1920 SP rumor came from) but those fell through when TSMC no longer saw a point in pushing a manufacturing process which very few companies would pick up on. [snip]
    Like Heinz68, I am curious about this. Is this something AMD told you?

    In your 6970 review you said AMD had taped out some of the new architecture products before deciding against using 32nm for all of them. So they had some products for this arch taped out before ~Nov'09? That seems like a really long time.

  17. #492
    Xtreme Member
    Join Date
    Aug 2006
    Posts
    215
    Quote Originally Posted by Dimitriman View Post
    What you are doing is invalidating AMD's strategy of pairing two efficient chips to match one giant chip from NVIDIA. Basically you mean it is only fair to compare one chip with one chip, no matter how they are designed. Sure, the 5970 has some microstuttering, but most people don't care or don't notice it when purchasing the product. The fairest comparison is between products in the same price range that fit into one PCIe slot, and thus 580 vs. 5970 is a fair comparison between similar products. Just because one is big and powerful and the other is dual and efficient doesn't change much of the user experience today; they are merely design choices.
    No my friend, you are wrong. It most definitely is not the same.

    If it actually were the same, we'd be seeing four 5770s on a card. Why don't we see that? Because it doesn't work like that.

  18. #493
    Xtreme Addict
    Join Date
    Jul 2007
    Posts
    1,488
    Quote Originally Posted by tdream View Post
    No my friend, you are wrong. It most definitely is not the same.

    If it actually were the same, we'd be seeing four 5770s on a card. Why don't we see that? Because it doesn't work like that.
    There are lots of people who buy multiple midrange boards and SLI/CF them to match or beat the performance of larger single-chip cards. Companies aren't offering (many) cards with multiple midrange chips because the extra cost of the board components needed for CF/SLI offsets the savings from smaller chips.

  19. #494
    Xtreme Enthusiast
    Join Date
    Mar 2009
    Location
    Toronto ON
    Posts
    566
    Quote Originally Posted by tdream View Post
    No my friend, you are wrong. It most definitely is not the same.

    If it actually were the same, we'd be seeing four 5770s on a card. Why don't we see that? Because it doesn't work like that.
    WOW, four GPUs on one card, what a bright new idea. The first GPU would say hi; the problem is the last GPU wouldn't be able to close the door.

    Plus, if some people believe there are so many problems with two GPUs, four would not make it any better. Most of the time there is very good scaling with two GPUs, not so much with a third, and even less with a fourth, if any.
    Core i7-4930K LGA 2011 Six-Core - Cooler Master Seidon 120XL ? Push-Pull Liquid Water
    ASUS Rampage IV Black Edition LGA2011 - G.SKILL Trident X Series 32GB (4 x 8GB) DDR3 1866
    Sapphire R9 290X 4GB TRI-X OC in CrossFire - ATI TV Wonder 650 PCIe
    Intel X25-M 160GB G2 SSD - WD Black 2TB 7200 RPM 64MB Cache SATA 6
    Corsair HX1000W PSU - Pioner Blu-ray Burner 6X BD-R
    Westinghouse LVM-37w3, 37inch 1080p - Windows 7 64-bit Pro
    Sennheiser RS 180 - Cooler Master Cosmos S Case

  20. #495
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by Heinz68 View Post
    Any links about that? I can't Google any, but there are plenty of reports about the TSMC 32nm cancellation. Personally, I believe TSMC cancelled the node because of all the problems with 40nm and GlobalFoundries' announced work on 28nm.

    The bottom line is, if GlobalFoundries had gotten a successful 28nm process up against TSMC's 32nm, TSMC could even have lost NVIDIA's business. They sure could not afford that.

    Here is what AMD's Vice President and General Manager of its GPU Division said, according to bit-tech

    I don't think Skynner lied about it; after all, they still need TSMC

    EDIT
    If TSMC cancelled 32nm because of AMD, why wouldn't TSMC say so? Or did they? If they did, that sure is big news.
    No one outright lies in this industry, but PR is all about selective truth-telling... and of course a fair amount of embellishment by certain publications in order to give a certain voice to articles.

    TSMC cancelled their 32nm process. Why should anyone need to know more? Even the shareholders usually get a warmed-over version. There are so many stories within stories that the real truth is hardly ever that simple.

    I am not saying that AMD dropping their lower-end 32nm cards was the end-all for 32nm, but rather one of the main contributing factors to TSMC's re-evaluation of their roadmap.

    In the past, ATI's cards have very much been route-proving products for TSMC's high-performance lines. We saw this with 40nm, 55nm, etc. The manufacturing relationship between ATI (now AMD) and TSMC allowed for a mutually beneficial roll-out procedure that ended up benefiting clients like NVIDIA as well.

    So yeah, there were probably other economic factors behind TSMC shutting down 32nm fabrication before it even produced anything past test wafers. However, losing high-volume parts from a major client likely had a massive impact.

    Quote Originally Posted by Solus Corvus View Post
    Like Heinz68, I am curious about this. Is this something AMD told you?

    In your 6970 review you said AMD had taped out some of the new architecture products before deciding against using 32nm for all of them. So they had some products for this arch taped out before ~Nov'09? That seems like a really long time.

    Regardless of what certain outlets state, an initial tape-out usually happens 9-12 months (or even more) before volume production. And yes, I can state that my conversations with AMD covered the points above and then some. Some I can discuss, most I can't.

  21. #496
    Xtreme Enthusiast
    Join Date
    Jan 2008
    Location
    Chicago, IL USA
    Posts
    582
    Just installed an HD 6970.

    Here are the 3DMark06 results.


  22. #497
    Xtreme Addict
    Join Date
    Jul 2007
    Posts
    1,488
    Quote Originally Posted by SKYMTL View Post
    Regardless of what certain outlets state, an initial tape-out usually happens 9-12 months (or even more) before volume production. And yes, I can state that my conversations with AMD covered the points above and then some. Some I can discuss, most I can't.
    It would fall into the "even more" category if it was November or earlier; that's why I was wondering. Not unheard of, but still on the long side of things. Maybe there were more issues than just the process?

  23. #498
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    Amazing how many people don't realize it is possible to compare apples to oranges. What matters is the end user's preference, not your own.

    All along the watchtower the watchmen watch the eternal return.

  24. #499
    Xtreme Mentor
    Join Date
    Feb 2008
    Posts
    2,671
    I'm not impressed with these. £220 for the cheapest 6950 or £280 for the cheapest 6970, with those rubbish reference coolers (high temps, too much noise), versus £155 for the MSI Talon Attack GTX 460 Hawk edition with low temps and noise and great overclocking potential.

    The GTX 560 looks like it will beat the 6950 by a large margin.

  25. #500
    Xtreme Member
    Join Date
    Nov 2007
    Location
    France
    Posts
    107
    Quote Originally Posted by bhavv View Post
    I'm not impressed with these. £220 for the cheapest 6950 or £280 for the cheapest 6970, with those rubbish reference coolers (high temps, too much noise), versus £155 for the MSI Talon Attack GTX 460 Hawk edition with low temps and noise and great overclocking potential.

    The GTX 560 looks like it will beat the 6950 by a large margin.
    Perhaps, but it will be designed with only 1 GB of memory.

