
Thread: [News] AMD Ryzen Infinity Fabric Ticks at Memory Speed

  1. #1
    StyM (Join XS BOINC Team)
    Join Date: Mar 2006 · Location: Tropics · Posts: 9,468

    [News] AMD Ryzen Infinity Fabric Ticks at Memory Speed

    https://www.techpowerup.com/231585/a...t-memory-speed

    Memory clock speed goes a long way toward improving the performance of an AMD Ryzen processor, according to new information from the company. AMD reveals that Infinity Fabric, the high-bandwidth interconnect that links the two quad-core complexes (CCXs) on 6-core and 8-core Ryzen processors with other uncore components, such as the PCIe root complex and the integrated southbridge, is synced with the memory clock. AMD made this revelation in response to a question posed by Reddit user CataclysmZA.

    Infinity Fabric, the successor to HyperTransport, is AMD's latest interconnect technology, connecting the various components on the Ryzen "Summit Ridge" processor and on the upcoming "Vega" GPU family. According to AMD, it is a 256-bit wide bi-directional crossbar. Think of it as the town square of the chip, where tagged data and instructions change hands between the various components. Within a CCX, the L3 cache handles some inter-core connectivity. The speed of the Infinity Fabric crossbar on a "Summit Ridge" Ryzen processor is determined by the memory clock: paired with DDR4-2133 memory, for example, the crossbar ticks at 1066 MHz (the actual SDR clock). Using faster memory therefore, according to AMD, has a direct impact on the bandwidth of this interconnect.
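
    As a rough back-of-the-envelope sketch of the relationship described above: the crossbar runs at the memory's actual (SDR) clock, i.e. half the DDR4 transfer rate. The bandwidth figure below is only an inference from the stated 256-bit width (assuming 256 bits move per fabric cycle), not a number AMD has published.

    Code:
    # Sketch: Infinity Fabric clock = actual memory clock = half the DDR4 rating.
    # The GB/s line assumes 256 bits transferred per fabric cycle (an inference
    # from the stated 256-bit crossbar width, not an official AMD figure).

    def fabric_clock_mhz(ddr4_rating_mts):
        """DDR4-xxxx rating in MT/s -> fabric clock in MHz (actual clock, not DDR)."""
        return ddr4_rating_mts / 2

    for rating in (2133, 2666, 3200):
        clk = fabric_clock_mhz(rating)
        gbs = clk * 1e6 * 256 / 8 / 1e9  # bits per cycle -> GB/s (assumed width)
        print(f"DDR4-{rating}: fabric ~{clk:.0f} MHz, ~{gbs:.0f} GB/s (assumed)")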

  2. #2
    StAndrew (Xtreme Enthusiast)
    Join Date: Aug 2008 · Posts: 889
    So overclocking memory may be more difficult on Ryzen... I wonder if they will/can offer a multiplier adjustment to get higher memory speeds while keeping the Infinity Fabric stable.
    Intel 8700k
    16GB
    Asus z370 Prime
    1080 Ti
    x2 Samsung 850 Evo 500GB
    x1 Samsung 860 Evo 500GB NVMe


    Swiftech Apogee XL2
    Swiftech MCP35X x2
    Full Cover GPU blocks
    360 x1, 280 x1, 240 x1, 120 x1 Radiators

  3. #3
    xlink (Xtreme Mentor)
    Join Date: Aug 2006 · Location: HD0 · Posts: 2,646
    Quote Originally Posted by StAndrew View Post
    So overclocking memory may be more difficult on Ryzen... I wonder if they will/can offer a multiplier adjustment to get higher memory speeds while keeping the Infinity Fabric stable.
    Just run tighter timings.

  4. #4
    zanzabar (I am Xtreme)
    Join Date: Jul 2007 · Location: SF bay area, CA · Posts: 15,871
    Quote Originally Posted by StAndrew View Post
    So overclocking memory may be more difficult on Ryzen... I wonder if they will/can offer a multiplier adjustment to get higher memory speeds while keeping the Infinity Fabric stable.
    I would guess no, or they would have clocked the CCX link faster. DDR4 also has way more bandwidth than the latency lets you use, so, as xlink said, just tighten up the timings. Going the other way, the CCX link really needs to be faster than the memory, but with no multiplier that will hurt them on servers unless they can get a reasonable clock speed.
    Last edited by zanzabar; 03-17-2017 at 12:41 PM.
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  5. #5
    xlink (Xtreme Mentor)
    Join Date: Aug 2006 · Location: HD0 · Posts: 2,646
    Quote Originally Posted by zanzabar View Post
    I would guess no, or they would have clocked the CCX link faster. DDR4 also has way more bandwidth than the latency lets you use, so, as xlink said, just tighten up the timings. Going the other way, the CCX link really needs to be faster than the memory, but with no multiplier that will hurt them on servers unless they can get a reasonable clock speed.
    being a stickler here and using very precise language...

    DDR4's effective real world bandwidth is materially reduced by its latencies. Shifting timings can increase real world data transfer rates to an extent and help mitigate any losses from running at lower switching rates.


    Also remember:
    Latency is proportional to Timings/Clock Speed.

    If clock speed goes down, you suffer worse latency if you don't improve your timings.

    It should be possible to get to a point where improvements in overall memory performance are immaterial.
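
    (A minimal sketch of the proportionality above, assuming the usual first-word conversion of CAS cycles to nanoseconds; the speed/CAS pairs are illustrative, not figures from this thread.)

    Code:
    # First-word latency in ns = CAS cycles / actual memory clock (GHz).
    # The actual clock is half the DDR transfer rate (DDR4-3200 runs at 1600 MHz).

    def cas_latency_ns(cas_cycles, ddr_rating_mts):
        actual_clock_ghz = ddr_rating_mts / 2 / 1000  # MT/s -> GHz (SDR clock)
        return cas_cycles / actual_clock_ghz

    # The same nanosecond latency can be held at a lower clock by tightening timings:
    print(cas_latency_ns(16, 3200))  # 10.0 ns
    print(cas_latency_ns(14, 2800))  # 10.0 ns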

  6. #6
    zanzabar (I am Xtreme)
    Join Date: Jul 2007 · Location: SF bay area, CA · Posts: 15,871
    Quote Originally Posted by xlink View Post
    being a stickler here and using very precise language...

    DDR4's effective real world bandwidth is materially reduced by its latencies. Shifting timings can increase real world data transfer rates to an extent and help mitigate any losses from running at lower switching rates.


    Also remember:
    Latency is proportional to Timings/Clock Speed.

    If clock speed goes down, you suffer worse latency if you don't improve your timings.

    It should be possible to get to a point where improvements in overall memory performance are immaterial.

    Real timings in ns, not the divisors we normally use (the divisors are also annoying when programming new XMP profiles). Anyway, with DDR4 the latency in ns is already so high that you will likely never see huge diminishing returns on any timing set the memory or memory controller can actually run.
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  7. #7
    xlink (Xtreme Mentor)
    Join Date: Aug 2006 · Location: HD0 · Posts: 2,646
    Quote Originally Posted by zanzabar View Post
    Real timings in ns, not the divisors we normally use (the divisors are also annoying when programming new XMP profiles). Anyway, with DDR4 the latency in ns is already so high that you will likely never see huge diminishing returns on any timing set the memory or memory controller can actually run.
    Do you mean cycles?

    In terms of ns, memory speed has been fairly stagnant for a while.


    I do acknowledge that the timings listed for the older stuff are horrible (e.g. I was hitting CAS 2/3 on DDR1 and CAS 3/4 on DDR2 at decent clocks)... with that said, latency IS becoming a bigger issue when doing MANY serial requests back-to-back, but it's not so bad for large/parallel requests (which are becoming more common).
    Last edited by xlink; 03-17-2017 at 01:42 PM.

  8. #8
    zanzabar (I am Xtreme)
    Join Date: Jul 2007 · Location: SF bay area, CA · Posts: 15,871
    Cycle time is the inverse of the frequency, and the true latency in ns is what you measure when you program the kits. In a BIOS setting it is way easier to work with a divisor, where you (for the sake of this example) multiply the cycle time by the timing divisor. When you look at it in ns you will see that everything is about the same latency (like your chart shows), but if you go to a lower speed you can eke out slightly lower latency. For example, with overclocking we used to do CAS 1.5 on DDR-400, which is 7.5 ns, while DDR4-2666 at CAS 14 is 10.5 ns. The big gains on DDR4 come from finding the best rank-to-rank and bank-to-bank tertiary timings. Since DDR4 physically works very differently from DDR3 and earlier, tuning the chip-to-chip latency gives you amazing gains in real-world benching.
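
    (A quick check of the figures above, using the same cycles-to-nanoseconds conversion as the earlier sketch; the speed/CAS pairs are the ones quoted in the post.)

    Code:
    # CAS latency in ns = CAS cycles / actual clock (GHz), actual clock = rating / 2.
    print(1.5 / (400 / 2 / 1000))   # DDR-400 at CAS 1.5  -> 7.5 ns
    print(14 / (2666 / 2 / 1000))   # DDR4-2666 at CAS 14 -> ~10.5 ns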
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  9. #9
    StAndrew (Xtreme Enthusiast)
    Join Date: Aug 2008 · Posts: 889
    Maybe I read it wrong, but what I was trying to imply was that your memory speed could, theoretically, be bound by the Infinity Fabric (am I the only one who hates that name?).
    Intel 8700k
    16GB
    Asus z370 Prime
    1080 Ti
    x2 Samsung 850 Evo 500GB
    x1 Samsung 860 Evo 500GB NVMe


    Swiftech Apogee XL2
    Swiftech MCP35X x2
    Full Cover GPU blocks
    360 x1, 280 x1, 240 x1, 120 x1 Radiators

  10. #10
    zanzabar (I am Xtreme)
    Join Date: Jul 2007 · Location: SF bay area, CA · Posts: 15,871
    Quote Originally Posted by StAndrew View Post
    Maybe I read it wrong, but what I was trying to imply was that your memory speed could, theoretically, be bound by the Infinity Fabric (am I the only one who hates that name?).
    What would you gain from the memory having more bandwidth than the thing it is connected to? You would get all of the DDR3 socket 775 (or NV-chipset 775 DDR2) problems again. On current Intel stuff they also limit the memory to the QPI (or whatever the bus is called now), so you cannot run the RAM faster than the thing that controls it.
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi
