
Thread: GCN vs. VLIW5 performance improvements

  1. #1
    Xtreme Enthusiast
    Join Date
    May 2006
    Location
    over the rainbow
    Posts
    964

    Thumbs up GCN vs. VLIW5 performance improvements

    ht4u.net made a pretty awesome comparison.

    They took an HD 5770 and a new HD 7770. Both cards have 40 TMUs and 16 ROPs, with a 128-bit memory interface and GDDR5.
    And both have 10 shader clusters.

    The only difference is that while the HD 5770 has 160 5D (VLIW5) shaders, the HD 7770 uses 640 1D GCN shaders.

    Now they clocked the HD 7770 down to the same clocks as the HD 5770 and compared the GPUs.

    HD 5770: 1360 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s
    HD 7770: 1088 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s

    You can see that the HD 7770 has about 20% fewer FLOPs (1088 vs. 1360 GFLOP/s), while the rest of the specs are identical (and keep in mind that the HD 7770 also has the better AF).
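
    Those figures drop straight out of the public specs. Here's a quick back-of-the-envelope check (my own math, not ht4u's), assuming 2 FLOPs per ALU per clock (MAD), one texel per TMU and one pixel per ROP per clock, and 4.8 Gbps GDDR5 on the 128-bit bus:

    Code:
    # Recomputing the spec table at the HD 5770's clocks (850 MHz core,
    # 1200 MHz / 4.8 Gbps GDDR5). Rounded public specs, so approximate.
    core_mhz = 850
    mem_gbps = 4.8    # GDDR5 effective data rate per pin
    bus_bits = 128

    for name, alus in [("HD 5770 (160 x 5D)", 160 * 5), ("HD 7770 (640 x 1D)", 640)]:
        gflops = alus * 2 * core_mhz / 1000   # 2 FLOPs per ALU per clock (MAD)
        gtex = 40 * core_mhz / 1000           # 40 TMUs, 1 texel/clock each
        gpix = 16 * core_mhz / 1000           # 16 ROPs, 1 pixel/clock each
        gbs = mem_gbps * bus_bits / 8         # bytes/s across the 128-bit bus
        print(f"{name}: {gflops:.0f} GFLOP/s, {gtex:.1f} GTex/s, "
              f"{gpix:.1f} GPix/s, {gbs:.1f} GB/s")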

    The results are great: the HD 7770 is up to 37% faster in some games and loses to the HD 5770 in only two titles, Dirt 3 and Dragon Age 2, and there by at most 1.8%.
    Plus, drivers for the VLIW5 arch are mature while GCN is still quite new, so we'll surely see further improvements there.

    So, despite having about 20% less raw power, the HD 7770 is faster in most games, sometimes even by far. This seems like the step forward AMD has needed for a long time.


    LINK: http://ht4u.net/reviews/2012/amd_rad...est/index4.php

    (This isn't supposed to be about the price of the HD 7770)
    AMD Phenom II X6 1055T@3.5GHz@Scythe Mugen 2 <-> ASRock 970 Extreme4 <-> 8GB DDR3-1333 <-> Sapphire HD7870@1100/1300 <-> Samsung F3 <-> Win8.1 x64 <-> Acer Slim Line S243HL <-> BQT E9-CM 480W

  2. #2
    Registered User
    Join Date
    Jun 2009
    Posts
    52
    Don't forget that the HD 7770, with its 640 GCN shaders and everything built around them, needed 50% more transistors: from 1 billion up to 1.5 billion.
    Effectively, the number of active transistors is about the same as Barts Pro, so it's not comparable to Juniper at all.

  3. #3
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by bladerash View Post
    Don't forget that the HD 7770, with its 640 GCN shaders and everything built around them, needed 50% more transistors: from 1 billion up to 1.5 billion.
    Effectively, the number of active transistors is about the same as Barts Pro, so it's not comparable to Juniper at all.
    You're forgetting the ~20% extra transistors from the die shrink.

    Plus all the extra from new features like PCIe 3.0, etc.

    I'm sure if you took all that out, it's probably actually a 10-20% difference, if that makes sense?
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  4. #4
    Xtreme Mentor
    Join Date
    Apr 2003
    Location
    Ankara Turkey
    Posts
    2,631
    20% less raw power but still faster, so what was bottlenecking VLIW5? GTex, GPix and bandwidth are the same and raw power is lower, so I don't think transistors explain this, and I also don't think PCIe 2.0 is bottlenecking that much.


    When I'm being paid I always follow my job through.

  5. #5
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    Games wouldn't use all the raw power. Think about how FurMark would run on a 4800/5800: the cards would cry for more cooling, but no game ever came close to using that kind of power.

    I had a defective 4850 and never noticed until I played Mass Effect, because it pushed about 80% of the GPU's power; FurMark then confirmed the card had a problem. No other game stressed the architecture enough to hit that limit. I think 3DMark Vantage is another app that likes the VLIW5 arch too. The newer generations lower the raw power but gain in actual utilization.
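
    Here's a toy model of what I mean (my own sketch, nothing like AMD's real compiler): VLIW5 only hits its paper FLOPs when five independent ops can be packed into each instruction, and long chains of dependent math leave most slots empty.

    Code:
    # Toy VLIW5 slot-packing model: each "run" is a group of mutually
    # independent ops; a run must finish before the next one can start.
    from math import ceil

    def vliw5_utilization(runs):
        cycles = sum(ceil(n / 5) for n in runs)  # pack up to 5 ops per cycle
        return sum(runs) / (cycles * 5)          # filled slots / total slots

    # A shader that's mostly dependent scalar math with a bit of vector work
    # fills well under half the slots, so games never got near the paper peak:
    print(f"{vliw5_utilization([1, 1, 1, 2, 4, 3]):.0%} of VLIW5 peak")  # 40%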
    2500k @ 4900mhz - Asus Maxiums IV Gene Z - Swiftech Apogee LP
    GTX 680 @ +170 (1267mhz) / +300 (3305mhz) - EK 680 FC EN/Acteal
    Swiftech MCR320 Drive @ 1300rpms - 3x GT 1850s @ 1150rpms
    XS Build Log for: My Latest Custom Case

  6. #6
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    And if you looked at GPGPU apps, the HD 5770 would probably destroy the HD 7770...

  7. #7
    Registered User
    Join Date
    Jun 2009
    Posts
    52
    Quote Originally Posted by Jamesrt2004 View Post
    You're forgetting the ~20% extra transistors from the die shrink.

    Plus all the extra from new features like PCIe 3.0, etc.

    I'm sure if you took all that out, it's probably actually a 10-20% difference, if that makes sense?
    You don't need 20% extra transistors just because of a die shrink. Even an increase of 5% would be a lot.

    When the HD 4870 shrank to the HD 5770, the difference was ''only'' 100 million transistors (~9%), and the HD 5770 added way more features: DX11, tessellation, OpenGL 4.2, OpenCL 1.1, Eyefinity...

    An upgrade to DX11.1 and PCIe 3.0 wouldn't be anywhere near as big as that upgrade two years ago.

    All those extra transistors are for GCN and the things built around it, like the extra schedulers in every CU. They make the whole thing bigger, so the least it can do is perform better.
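
    For reference, the raw numbers behind those percentages (rounded public transistor counts, so treat them as approximate):

    Code:
    # Approximate transistor counts, in millions (rounded public figures):
    rv770 = 956         # HD 4870
    juniper = 1040      # HD 5770
    cape_verde = 1500   # HD 7770
    print(f"RV770 -> Juniper: +{juniper - rv770}M ({juniper / rv770 - 1:.0%})")
    print(f"Juniper -> Cape Verde: +{cape_verde - juniper}M "
          f"({cape_verde / juniper - 1:.0%})")  # the "50%" upthread rounds Juniper down to 1.0B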

  8. #8
    Xtreme Enthusiast
    Join Date
    May 2006
    Location
    over the rainbow
    Posts
    964
    Quote Originally Posted by Hornet331 View Post
    And if you looked at GPGPU apps, the HD 5770 would probably destroy the HD 7770...
    No, the other way around. The HD 7770 can sometimes even beat the HD 6950 at GPGPU.
    AMD Phenom II X6 1055T@3.5GHz@Scythe Mugen 2 <-> ASRock 970 Extreme4 <-> 8GB DDR3-1333 <-> Sapphire HD7870@1100/1300 <-> Samsung F3 <-> Win8.1 x64 <-> Acer Slim Line S243HL <-> BQT E9-CM 480W

  9. #9
    Xtreme Member
    Join Date
    Apr 2010
    Location
    Budaors, Hungary.
    Posts
    143
    Just remember that DAAMIT ditched the fixed function tessellation unit with the HD5k series.

    "We are going to hell, so bring your sunblock..."

  10. #10
    I am Xtreme
    Join Date
    Jul 2007
    Location
    Austria
    Posts
    5,485
    Quote Originally Posted by w0mbat View Post
    No, the other way around. The HD 7770 can sometimes even beat the HD 6950 at GPGPU.
    Most benchmarks don't mean anything. The best example was mining: the HD 5xxx trounced the HD 6xxx even though the 6xxx was supposed to be better (it won more synthetics).

  11. #11
    Xtreme Enthusiast
    Join Date
    Jul 2004
    Posts
    535
    Quote Originally Posted by bladerash View Post
    When the HD 4870 shrank to the HD 5770, the difference was ''only'' 100 million transistors (~9%), and the HD 5770 added way more features: DX11, tessellation, OpenGL 4.2, OpenCL 1.1, Eyefinity...
    And halved the memory bus.

  12. #12
    Xtreme Member
    Join Date
    Jun 2005
    Location
    Bulgaria, Varna
    Posts
    447
    Quote Originally Posted by sutyi View Post
    Just remember that DAAMIT ditched the fixed function tessellation unit with the HD5k series.
    Tessellation is still done via dedicated hardware, at least in AMD's GPUs. The difference is that the programming model is now DX11-compliant, with two new shader types (Hull and Domain shaders) instead of the conventional vertex shader code the older Radeons used to program the tessellator.

  13. #13
    Xtreme Member
    Join Date
    Sep 2008
    Posts
    115
    Didn't ATI make changes to the ROPs? I'd expect more differences from that in gaming.

    Though the review shows that the 7xxx card loses more performance with AA in most games.

  14. #14
    Registered User
    Join Date
    Apr 2004
    Location
    Finland
    Posts
    6
    Quote Originally Posted by Hornet331 View Post
    Most benchmarks don't mean anything. The best example was mining: the HD 5xxx trounced the HD 6xxx even though the 6xxx was supposed to be better (it won more synthetics).
    Mining coins is one of the few GPGPU workloads, if not the only one, that can actually harness VLIW close to its full potential. It's an exception, not the norm.
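
    Every nonce is an independent work item, so a mining kernel is a sea of independent 32-bit integer ops (adds, rotates, XORs) that the compiler can pack five wide with almost no gaps. A rough Python sketch of the work item, purely illustrative (real miners are OpenCL kernels):

    Code:
    # Bitcoin-style double SHA-256 over an 80-byte header; each nonce is
    # hashed independently, so millions can run in parallel with no stalls.
    import hashlib
    import struct

    def hash_nonce(header: bytes, nonce: int) -> bytes:
        data = header + struct.pack("<I", nonce)
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    header = b"\x00" * 76            # dummy 76-byte header prefix
    # A GPU sweeps a huge nonce range in parallel; no hash depends on
    # another's result, which is exactly the parallelism VLIW5 wants.
    best = min(range(1000), key=lambda n: hash_nonce(header, n))
    print(best)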

  15. #15
    Xtreme Enthusiast
    Join Date
    Feb 2009
    Location
    Hawaii
    Posts
    611
    Quote Originally Posted by Hornet331 View Post
    Most benchmarks don't mean anything. The best example was mining: the HD 5xxx trounced the HD 6xxx even though the 6xxx was supposed to be better (it won more synthetics).
    Mining is in no way a good example of GPGPU. It's the one thing the 5k and 6k lines excelled at, due to the simplicity of the process. Most GPGPU apps in the future won't just be hashing; AnandTech had a few apps in their 7970 review that made this point.
