
Thread: [News/Rumor] AMD 16-core Ryzen a Multi-Chip Module of two "Summit Ridge" Dies

  1. #1
    Join XS BOINC Team StyM's Avatar
    Join Date
    Mar 2006
    Location
    Tropics
    Posts
    16,513

    [News/Rumor] AMD 16-core Ryzen a Multi-Chip Module of two "Summit Ridge" Dies

    https://www.techpowerup.com/231911/a...mit-ridge-dies

    With core performance back to competitiveness, AMD is preparing to take on Intel in the HEDT (high-end desktop) segment with a new line of processors that are larger than its current socket AM4 "Summit Ridge" desktop processors, but smaller in core count than its 32-core "Naples" enterprise processors. These could include 12-core and 16-core parts, and the picture is getting clearer with an exclusive report by Turkish tech publication DonanimHaber. The biggest revelation here is that the 12-core and 16-core Ryzen processors will be multi-chip modules (MCMs) of two "Summit Ridge" dies. The 12-core variant will be carved out by disabling 1 core per CCX (3+3+3+3).

    Another revelation is that the 12-core and 16-core Ryzen processors will be built in a new LGA package with a pin count in excess of 4,000. Since it's an MCM of two "Summit Ridge" dies, the memory bus width and PCIe lanes will be doubled. The chip will feature a quad-channel DDR4 memory interface, and will have a total of 58 PCI-Express gen 3.0 lanes (only one of the two dies will put out the PCI-Express 3.0 x4 A-Link chipset bus). The increase in core count isn't coming with a decrease in clock speeds: the 12-core variant will hence likely have its TDP rated at 140 W, and the 16-core variant at 180 W. AMD is expected to unveil these chips at the Computex 2017 expo in Taipei this June, with product launches following shortly after.
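    The rumored configuration is easy to sanity-check with a little arithmetic. A minimal Python sketch of the core-count and interface math, using only figures from the report above (the constants are the rumor's claims, not confirmed specs):

```python
# Back-of-the-envelope arithmetic for the rumored two-die MCM.
# All figures are the report's claims, not confirmed specifications.

DIES = 2
CCX_PER_DIE = 2
CORES_PER_CCX = 4

def core_count(disabled_per_ccx=0):
    """Total cores with the same number of cores disabled in every CCX."""
    return DIES * CCX_PER_DIE * (CORES_PER_CCX - disabled_per_ccx)

full = core_count()        # 16-core part: 4+4+4+4
salvage = core_count(1)    # 12-core part: 3+3+3+3, as the report describes

# Per-die resources double across the package.
memory_channels = DIES * 2      # quad-channel DDR4 (two channels per die)
pcie_lanes_reported = 58        # the report's gen 3.0 lane total (one x4
                                # chipset link is shared, not doubled)

print(full, salvage, memory_channels, pcie_lanes_reported)
```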

  2. #2
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,956
    Makes sense. No reason to think otherwise; just curious if the L3 cache will be linked together or not.
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  3. #3
    Xtreme Mentor
    Join Date
    Aug 2006
    Location
    HD0
    Posts
    2,646
    This is good for AMD. They're clearly able to play in the high end of the market once again. Are they the gamers' dream? No, but they're good enough in gaming and great just about everywhere else.

    The real question is: how well does it perform at the things I care about (data analysis in R and Python)?

  4. #4
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,956
    Since when did Python data analysis scripts take advantage of 32 threads? I feel like an SMT quad-core would be fine there.

  5. #5
    Pie assassin
    Join Date
    Nov 2007
    Location
    Where lights collide
    Posts
    2,271
    That would make it pretty interesting; otherwise it's not much different from a dual-socket setup.
    Current Status - Testing & Research

  6. #6
    Xtreme Mentor
    Join Date
    Apr 2003
    Location
    Ankara Turkey
    Posts
    2,659
    Quote Originally Posted by AliG View Post
    Makes sense. No reason to think otherwise; just curious if the L3 cache will be linked together or not.
    Or could an L4 cache be a more effective, low-cost solution?

    But if they can't solve the current problem, disabling 1 core per CCX will increase the chance of hitting the same condition, and a 12c/24t CPU will be a nightmare.


    When I'm being paid I always see my job through.

  7. #7
    Xtreme Enthusiast
    Join Date
    Feb 2009
    Location
    Hawaii
    Posts
    611
    Quote Originally Posted by AliG View Post
    Since when did Python data analysis scripts take advantage of 32 threads? I feel like an SMT quad-core would be fine there.
    Since NumPy
    Xeon E3-1245 @ Stock | Gigabyte H87N-Wifi | 16GB Crucial Ballistix LP @ 1600Mhz | R7 260x | Much and varied storage

  8. #8
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,956
    Quote Originally Posted by Darakian View Post
    Since NumPy
    Ehh. I'd rather just use MATLAB and vectorize. Much faster, IMO (especially if you leverage their GPU toolbox).

  9. #9
    Xtreme Member
    Join Date
    Apr 2008
    Location
    Stockholm, Sweden
    Posts
    324
    Quote Originally Posted by AliG View Post
    Since when did Python data analysis scripts take advantage of 32 threads? I feel like an SMT quad-core would be fine there.
    Deep Learning Neural Networks.

  10. #10
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,956
    Quote Originally Posted by Eson View Post
    Deep Learning Neural Networks.
    Why would you EVER want to run a neural network off CPUs? That sounds like a horrible waste of power and efficiency.

    GPUs all the way. Not even close.

  11. #11
    Xtreme Addict
    Join Date
    Mar 2010
    Posts
    1,080
    Are we talking about REAL cores, or are we still calling this ALU + 0.5 FPU abomination a "core"?

    Are these cores ALU+FPU, or are we back to Bulldozer's modules?

  12. #12
    Xtreme Enthusiast
    Join Date
    Aug 2008
    Posts
    889
    Quote Originally Posted by El Mano View Post
    Are we talking about REAL cores, or are we still calling this ALU + 0.5 FPU abomination a "core"?

    Are these cores ALU+FPU, or are we back to Bulldozer's modules?
    Real cores. Basically, 16-core = two Ryzen 7s (4 CCXs connected by "Infinity Fabric"); 12-core = two Ryzen 5s.
    Intel 8700k
    16GB
    Asus z370 Prime
    1080 Ti
    x2 Samsung 850Evo 500GB
    x 1 500 Samsung 860Evo NVME


    Swiftech Apogee XL2
    Swiftech MCP35X x2
    Full Cover GPU blocks
    360 x1, 280 x1, 240 x1, 120 x1 Radiators

  13. #13
    Xtreme Member
    Join Date
    Apr 2008
    Location
    Stockholm, Sweden
    Posts
    324
    Quote Originally Posted by AliG View Post
    Why would you EVER want to run a neural network off CPUs? That sounds like a horrible waste of power and efficiency.

    GPUs all the way. Not even close.
    Thus, the answer to the question "do Python data analysis scripts take advantage of 32 threads?" is "yes".

  14. #14
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,956
    Quote Originally Posted by Eson View Post
    Thus, the answer to the question "do Python data analysis scripts take advantage of 32 threads?" is "yes".
    I mean, I guess? But just because you can do something on paper doesn't mean it's very efficient (or a good use of your money).

    I stand by what I said earlier: get a quad-core, throw in a GPU for half the price difference, and enjoy the best of both worlds.

  15. #15
    Moderator
    Join Date
    Oct 2007
    Location
    Oregon - USA
    Posts
    833
    Well aren't you boring
    Asus Rampage IV Extreme
    4930k @4.875
    G.Skill Trident X 2666 Cl10
    Gtx 780 SC
    1600w Lepa Gold
    Samsung 840 Pro 256GB


  16. #16
    Xtreme Enthusiast
    Join Date
    Feb 2009
    Location
    Hawaii
    Posts
    611
    Sure, you can retrain your entire staff to work in a new language, on a new architecture, with new fundamental concepts in program structure, then buy a brand-new cluster on which to run the resulting code... or you could buy a CPU.

  17. #17
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,956
    Quote Originally Posted by Darakian View Post
    Sure, you can retrain your entire staff to work in a new language, on a new architecture, with new fundamental concepts in program structure, then buy a brand-new cluster on which to run the resulting code... or you could buy a CPU.
    Ehh, I think you're overstating how difficult it is. If the automotive industry, of all industries, uses GPU rackmounts, I think others will be fine.

  18. #18
    Xtreme Member
    Join Date
    Apr 2008
    Location
    Stockholm, Sweden
    Posts
    324
    Quote Originally Posted by AliG View Post
    Ehh, I think you're overstating how difficult it is. If the automotive industry, of all industries, uses GPU rackmounts, I think others will be fine.
    Those are most likely for CAE simulations: aerodynamics, air/fuel flows and thermal dynamics in the engine, collision simulations, or digital production-line test runs.

    A quick look at GPGPU processing and deep learning suggests that GPUs are excellent for the training crunching, but with the CPU pre-processing the data there can be a several-fold gain in GPU crunching speed. All with the Python library TensorFlow: https://www.tensorflow.org/tutorials/using_gpu

    So there are still uses for new, better multicore CPUs.
    Last edited by Eson; 03-31-2017 at 02:34 AM.
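    The CPU-plus-GPU split Eson describes is at heart a producer/consumer pipeline: CPU code prepares batches while the accelerator consumes them. A framework-free Python sketch of that shape, with a simple sum standing in for the GPU training step (all names here are illustrative, not TensorFlow API):

```python
import queue
import threading

def preprocess(raw):
    """Toy CPU-side preprocessing: normalize a batch to [0, 1]."""
    top = max(raw)
    return [x / top for x in raw]

def producer(batches, q):
    for raw in batches:
        q.put(preprocess(raw))   # CPU work happens here, ahead of the consumer
    q.put(None)                  # sentinel: no more batches

def consumer(q, results):
    while True:
        batch = q.get()
        if batch is None:
            break
        # Stand-in for the GPU training step: just record the batch sum.
        results.append(sum(batch))

batches = [[1, 2, 4], [5, 10], [3, 3, 3]]
q = queue.Queue(maxsize=2)       # bounded queue keeps the producer from racing ahead
results = []
t1 = threading.Thread(target=producer, args=(batches, q))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)
```

    With real frameworks the consumer would be the GPU session and the producer a data-loading thread pool, but the overlap of CPU and accelerator work is the same idea.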
