
Thread: Nvidia drivers are handicapping Ryzen's performance, says new analysis by AdoredTV

  1. #1
    Registered User
    Join Date
    May 2009
    Location
    Amsterdam
    Posts
    45

    Nvidia drivers are handicapping Ryzen's performance, says new analysis by AdoredTV



    So what does this mean?
    What seems to be a Ryzen CPU bottleneck is actually an Nvidia driver bottleneck in this particular game.
    Nvidia's drivers lower AMD Ryzen's performance by not spreading their work across enough of the Ryzen CPU's threads, unlike AMD's drivers.
    What do you all think? Is it on purpose?

  2. #2
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,955
    Meh, more like laziness. What prior incentive did Nvidia have to optimize for AMD CPU performance?

    Now that Ryzen seems to be competitive with Intel, you may see that change (if only to prevent people from going AMD for graphics too). But no, I don't think there's too much to this.
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  3. #3
    Xtreme Owner Charles Wirth's Avatar
    Join Date
    Jun 2002
    Location
    Las Vegas
    Posts
    11,653
    Good watch, makes more sense now.
    Intel 9990XE @ 5.1GHz
    ASUS Rampage VI Extreme Omega
    RTX 2080 Ti Galax Hall of Fame
    64GB Galax Hall of Fame
    Intel Optane
    Platimax 1245W

    Intel 3175X
    Asus Dominus Extreme
    GTX 1080 Ti Galax Hall of Fame
    96GB Patriot Steel
    Intel Optane 900P RAID

  4. #4
    Xtreme Enthusiast
    Join Date
    Oct 2012
    Posts
    687
    Yeah, it is pretty enlightening.
    However, it seems it should help all CPUs with more than 4 cores, and only in DX12.
    Even Bulldozer should get a healthy boost :-D
    Someone should do a DX12 RX 480 CF Ryzen review. Or at the very least, do a comparison when Vega comes out.
    Intel 5960X@4.2Ghz[Prime stable]@4.5 [XTU stable] 1.24v NB@3.6ghz Asrock X99 Extreme 3 4x8GB Corsair Vengeance@3200 16-17-17
    Sapphire nitro+ VEGA 56 Samsung SSD 850 256GB Crucial MX100 512GB HDD:WD10TB WD:8TB Seagate8TB

  5. #5
    I am Xtreme zanzabar's Avatar
    Join Date
    Jul 2007
    Location
    SF bay area, CA
    Posts
    15,871
    Quote Originally Posted by vario View Post
    Yeah, it is pretty enlightening.
    However, it seems it should help all CPUs with more than 4 cores, and only in DX12.
    Even Bulldozer should get a healthy boost :-D
    Someone should do a DX12 RX 480 CF Ryzen review. Or at the very least, do a comparison when Vega comes out.
    Where are the BD chips with more than 4 cores? Are people gaming on G34? I don't see 4-threaded float work getting a boost on BD, since if it boosted int-threaded work they would have done it to help HT parts. Let's just forget BD existed and move on so we don't taint the Ryzen.
    5930k, R5E, samsung 8GBx4 d-die, vega 56, wd gold 8TB, wd 4TB red, 2TB raid1 wd blue 5400
    samsung 840 evo 500GB, HP EX 1TB NVME , CM690II, swiftech h220, corsair 750hxi

  6. #6
    Xtreme X.I.P. Particle's Avatar
    Join Date
    Apr 2008
    Location
    Kansas
    Posts
    3,219
    Oh look, someone wants to argue about the definition of a core.
    Particle's First Rule of Online Technical Discussion:
    As a thread about any computer related subject has its length approach infinity, the likelihood and inevitability of a poorly constructed AMD vs. Intel fight also exponentially increases.

    Rule 1A:
    Likewise, the frequency of a car pseudoanalogy to explain a technical concept increases with thread length. This will make many people chuckle, as computer people are rarely knowledgeable about vehicular mechanics.

    Rule 2:
    When confronted with a post that is contrary to what a poster likes, believes, or most often wants to be correct, the poster will pick out only minor details that are largely irrelevant in an attempt to shut out the conflicting idea. The core of the post will be left alone since it isn't easy to contradict what the person is actually saying.

    Rule 2A:
    When a poster cannot properly refute a post they do not like (as described above), the poster will most likely invent fictitious counter-points and/or begin to attack the other's credibility in feeble ways that are dramatic but irrelevant. Do not underestimate this tactic, as in the online world this will sway many observers. Do not forget: Correctness is decided only by what is said last, the most loudly, or with greatest repetition.

    Rule 3:
    When it comes to computer news, 70% of Internet rumors are outright fabricated, 20% are inaccurate enough to simply be discarded, and about 10% are based in reality. Grains of salt--become familiar with them.

    Remember: When debating online, everyone else is ALWAYS wrong if they do not agree with you!

    Random Tip o' the Whatever
    You just can't win. If your product offers feature A instead of B, people will moan how A is stupid and it didn't offer B. If your product offers B instead of A, they'll likewise complain and rant about how anyone's retarded cousin could figure out A is what the market wants.

  7. #7
    Xtreme Enthusiast
    Join Date
    Oct 2012
    Posts
    687
    Quote Originally Posted by zanzabar View Post
    Where are the BD chips with more than 4 cores? Are people gaming on G34? I don't see 4-threaded float work getting a boost on BD, since if it boosted int-threaded work they would have done it to help HT parts. Let's just forget BD existed and move on so we don't taint the Ryzen.
    Well, maybe I should word it differently: Bulldozers have neither 4 cores nor 8. They have 4 modules.
    TBH I don't know if they would get a boost, but there is a possibility. Anyhow, any Intel "morecores" CPUs should get a boost, and maybe even the Phenom II X6 ;-)
    And I'm gonna bet you there is someone out there gaming on a G34 :P

  8. #8
    I am Xtreme zanzabar's Avatar
    Join Date
    Jul 2007
    Location
    SF bay area, CA
    Posts
    15,871
    Quote Originally Posted by Particle View Post
    Oh look, someone wants to argue about the definition of a core.
    In this case it has to do with multiple floating-point threads, so it does matter. This was also brought up last year with Socket 2011 not using all of the cores on NV cards while AMD cards did.

    Quote Originally Posted by vario View Post
    Well, maybe I should word it differently: Bulldozers have neither 4 cores nor 8. They have 4 modules.
    TBH I don't know if they would get a boost, but there is a possibility. Anyhow, any Intel "morecores" CPUs should get a boost, and maybe even the Phenom II X6 ;-)
    And I'm gonna bet you there is someone out there gaming on a G34 :P
    4 cores with SMT is rather spot-on for what BD was. I also was not disagreeing about the more cores, since lazy threading was a problem on Socket 2011 with NV and DX12.


    Back when G34 came out we were trying to get AMD to sell an unlocked one, but they never did. I don't think even people like Dave had engineering samples that were unlocked. Let's hope SR will come in a reasonable spec for desktop use.
    Last edited by zanzabar; 03-31-2017 at 12:49 PM.

  9. #9
    Xtreme Mentor
    Join Date
    Aug 2006
    Location
    HD0
    Posts
    2,646
    This has been seen for a while:


    AMD GPUs performed nearly equally on AMD and Intel setups (when they were the bottleneck).
    Nvidia GPUs performed better on Intel by a non-negligible amount.
    Last edited by xlink; 03-31-2017 at 01:13 PM.

  10. #10
    Registered User
    Join Date
    May 2009
    Location
    Amsterdam
    Posts
    45
    Quote Originally Posted by xlink View Post
    This has been seen for a while:


    AMD GPUs performed nearly equally on AMD and Intel setups (when they were the bottleneck).
    Nvidia GPUs performed better on Intel by a non-negligible amount.
    What is shady to me is how CPU usage on a GeForce system is that much higher with an Intel CPU than with a Ryzen CPU. Nvidia's drivers basically ignore half the cores.

    It is an effective way to make Ryzen gaming look worse than it really is. Which review websites did not use Nvidia hardware?

  11. #11
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    Thread title is misleading; performance is "hindered" on Intel as well, as shown in the vid.

    What I don't see is overclocking the 7700K + RX 480 CrossFire setup to see if there's a GPU bottleneck instead of a driver bottleneck as claimed.
    Coding 24/7... Limited forums/PMs time.

    -Justice isn't blind, Justice is ashamed.

    Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P. ), Juan J. Guerrero

  12. #12
    I am Xtreme zanzabar's Avatar
    Join Date
    Jul 2007
    Location
    SF bay area, CA
    Posts
    15,871
    Quote Originally Posted by BenchZowner View Post
    Thread title is misleading; performance is "hindered" on Intel as well, as shown in the vid.

    What I don't see is overclocking the 7700K + RX 480 CrossFire setup to see if there's a GPU bottleneck instead of a driver bottleneck as claimed.
    It would be better to test with a Socket 2011 chip and then disable cores.

  13. #13
    Xtreme Enthusiast
    Join Date
    Aug 2008
    Posts
    889
    Quote Originally Posted by Ursus View Post
    What is shady to me is how CPU usage on a GeForce system is that much higher with an Intel CPU than with a Ryzen CPU. Nvidia's drivers basically ignore half the cores.

    It is an effective way to make Ryzen gaming look worse than it really is. Which review websites did not use Nvidia hardware?
    Because it's a 4-core, 8-thread Intel CPU vs. an 8-core, 16-thread AMD CPU.

    I think people use Nvidia primarily because they offer the best GPUs. No conspiracy here. It's pretty well known Nvidia's drivers don't do well at maximizing thread usage in DX12. Compounded with the new Ryzen architecture, it looks like Nvidia has some work to do.

    It'll be interesting to see how quickly they try to fix drivers for a competitor's CPU.
    Last edited by StAndrew; 03-31-2017 at 07:32 PM.
    Intel 8700k
    16GB
    Asus z370 Prime
    1080 Ti
    x2 Samsung 850Evo 500GB
    x1 Samsung 860 Evo 500GB NVMe


    Swiftech Apogee XL2
    Swiftech MCP35X x2
    Full Cover GPU blocks
    360 x1, 280 x1, 240 x1, 120 x1 Radiators

  14. #14
    Xtremely High Voltage Sparky's Avatar
    Join Date
    Mar 2006
    Location
    Ohio, USA
    Posts
    16,040
    Should be "currently offer the best GPUs."

    It isn't always the case. Just saying.

    It does seem to me that Nvidia's drivers need work. Something's just messed up with them. It shows worse on Ryzen, but still shows some on the Intel systems too.
    The Cardboard Master
    Crunch with us, the XS WCG team
    Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64

  15. #15
    Registered User
    Join Date
    May 2009
    Location
    Amsterdam
    Posts
    45
    Quote Originally Posted by BenchZowner View Post
    Thread title is misleading; performance is "hindered" on Intel as well, as shown in the vid.
    Watch again. The video shows that going from Radeon to GeForce decreases Ryzen performance far more than it does Intel's.

    Nvidia's drivers are "handicapping" Ryzen performance, whether because of incompetence or something else.

    I guess after years of Nvidia optimizing their drivers for Intel platforms this was to be expected. I hope this issue has some priority for them.

  16. #16
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    There are plenty of variables that you and AdoredTV are ignoring.
    Starting from the very fact that the comparison pits a dual-GPU setup against a single-GPU setup.

  17. #17
    Xtreme Enthusiast
    Join Date
    Oct 2012
    Posts
    687
    Quote Originally Posted by BenchZowner View Post
    There are plenty of variables that you and AdoredTV are ignoring.
    Starting from the very fact that the comparison pits a dual-GPU setup against a single-GPU setup.
    I was thinking about that too; someone should check SLI 1070s. More GPUs = more core usage?
    It could be.

  18. #18
    Registered User
    Join Date
    May 2009
    Location
    Amsterdam
    Posts
    45
    Quote Originally Posted by BenchZowner View Post
    There are plenty of variables that you and AdoredTV are ignoring.
    Starting from the very fact that the comparison pits a dual-GPU setup against a single-GPU setup.
    I don't think that matters for the point they are making. Here's how I see the logic:

    1. On a GTX 1070 at 1080p, in what SEEMS to be a CPU-limited scenario, the Ryzen scores about 30% lower FPS than the 7700K.
    2. On RX 480 CrossFire, the 7700K score increases by about 10% (meaning the GTX 1070 score on the Intel system was actually GPU-limited), but the Ryzen score increases by more than 35%, making the Ryzen system about 3-4% slower than the 7700K system.

    The point is, the limiting factor is the Nvidia driver, not the Ryzen CPU.
    Now even if this problem does not exist for Nvidia in SLI, it still means the low Ryzen score is not a CPU bottleneck, because the CPU is able to offer better performance in a different scenario.
    To draw final conclusions someone should test Ryzen with SLI, or Ryzen vs. Intel on a single RX 480 vs. a GeForce 1060 at an extremely low resolution to ensure the bottleneck is the CPU.
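    The arithmetic in that post can be sanity-checked in a few lines of Python. This is only an illustrative sketch: the FPS values below are hypothetical stand-ins chosen to match the rough percentages quoted from the video, not measured numbers.

    ```python
    # Hypothetical FPS figures matching the rough percentages in the post above.
    intel_gtx1070 = 100.0   # 7700K + GTX 1070 baseline
    ryzen_gtx1070 = 70.0    # Ryzen scores ~30% lower on the same Nvidia card
    intel_rx480cf = 110.0   # 7700K gains ~10% on RX 480 CF (1070 run was GPU-limited)
    ryzen_rx480cf = 106.0   # Ryzen ends up ~3-4% behind the 7700K on Radeon

    def pct_gain(before, after):
        """Percentage change going from `before` to `after`."""
        return (after - before) / before * 100.0

    # If changing only the GPU vendor lifts Ryzen far more than Intel,
    # the Ryzen CPU cannot be the limit; something vendor-specific is.
    ryzen_gain = pct_gain(ryzen_gtx1070, ryzen_rx480cf)   # ~51%
    intel_gain = pct_gain(intel_gtx1070, intel_rx480cf)   # 10%
    driver_limited = ryzen_gain - intel_gain > 10.0       # threshold is arbitrary
    ```

    With these figures the Ryzen system gains roughly 51% from the GPU swap versus 10% for the 7700K, which is exactly the asymmetry the argument rests on.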

  19. #19
    One-Eyed Killing Machine
    Join Date
    Sep 2006
    Location
    Inside a pot
    Posts
    6,340
    Since YouTube took over, everybody became a... very educated reviewer...

    Most of the vids on YouTube are technically lacking, and the tests are most of the time inconsistent and inconclusive because they always miss something.

  20. #20
    Xtreme Mentor
    Join Date
    Apr 2003
    Location
    Ankara Turkey
    Posts
    2,631
    Nvidia doing this intentionally is just what I expect from Nvidia. It would be a surprise if they didn't do something like this, IMO.


    When I'm being paid I always do my job thoroughly.

  21. #21
    Xtreme Enthusiast
    Join Date
    Feb 2010
    Posts
    578
    Quote Originally Posted by kromosto View Post
    Nvidia doing this intentionally is just what I expect from Nvidia. It would be a surprise if they didn't do something like this, IMO.
    But why? Nvidia is not competing with AMD in the x86 market; they compete in the dGPU arena. Making it harder for Ryzen owners to run Nvidia graphics cards makes absolutely no sense... unless Intel is giving them kickbacks or something.

    It's more likely that Nvidia spends more time optimizing for Intel systems since AMD's installed base is comparatively small.

  22. #22
    Xtreme Mentor
    Join Date
    Apr 2003
    Location
    Ankara Turkey
    Posts
    2,631
    I have been a member of the PC community since 1996 (before that, a proud Amiga user) and have been into hardware since 2002. From all those years I remember things like Nvidia cheating in drivers to inflate 3DMark scores, keeping PhysX for itself, not implementing a DirectX version completely and thereby forcing game companies to disable features of that DirectX version, or that stupid G-Sync when there is something like FreeSync, etc. They just like to play dirty, so I expect anything from them.



  23. #23
    Registered User
    Join Date
    May 2009
    Location
    Caldas da Rainha, Portugal
    Posts
    38
    Quote Originally Posted by Ursus View Post
    I don't think that matters for the point they are making. Here's how I see the logic:

    1. On a GTX 1070 at 1080p, in what SEEMS to be a CPU-limited scenario, the Ryzen scores about 30% lower FPS than the 7700K.
    2. On RX 480 CrossFire, the 7700K score increases by about 10% (meaning the GTX 1070 score on the Intel system was actually GPU-limited), but the Ryzen score increases by more than 35%, making the Ryzen system about 3-4% slower than the 7700K system.

    The point is, the limiting factor is the Nvidia driver, not the Ryzen CPU.
    Now even if this problem does not exist for Nvidia in SLI, it still means the low Ryzen score is not a CPU bottleneck, because the CPU is able to offer better performance in a different scenario.
    To draw final conclusions someone should test Ryzen with SLI, or Ryzen vs. Intel on a single RX 480 vs. a GeForce 1060 at an extremely low resolution to ensure the bottleneck is the CPU.
    That's what I got from it: it seems Nvidia's drivers are a big variable when testing for CPU bottlenecks, which totally defeats the purpose of the tests.

    No need for SLI or CrossFire, since it introduces other variables like scaling, IMO: just test with a good card from each manufacturer.

  24. #24
    Registered User
    Join Date
    May 2009
    Location
    Amsterdam
    Posts
    45
    Quote Originally Posted by BenchZowner View Post
    Since YouTube took over, everybody became a... very educated reviewer...

    Most of the vids on YouTube are technically lacking, and the tests are most of the time inconsistent and inconclusive because they always miss something.
    I do not understand you.
    You don't agree with the word "handicapped", you claim that I and others are ignoring "variables", and when I explain myself you come back with this vague story about everyone on YouTube being wrong?

    Please point out where the reviewer is wrong, or where my logic is wrong. I really am open to other opinions, but with all the info I see, the best-case scenario is that Nvidia failed to implement true CPU scalability in their drivers; the worst-case scenario is that Nvidia is sabotaging AMD.

    I lean towards the first, but history teaches us that Nvidia is capable of orchestrating the second possibility as well.

  25. #25
    Registered User
    Join Date
    May 2009
    Location
    Caldas da Rainha, Portugal
    Posts
    38
    This can even be proved using just ONE CPU (but one card from each manufacturer): just run RotTR in DX11 and DX12 with settings you're absolutely sure will bottleneck the CPU, with both cards at stock and then at the highest overclock you can get on the cards.

    If the CPU is bottlenecked "properly", then going from a stock Nvidia card to an OCed one should yield margin-of-error differences, and the same should be true for the stock AMD card vs. the OCed one. But if the differences between manufacturers are a lot higher than margin of error, then you'll have your proof right there.

    Even if this turns out to be true, it could still be game-specific, in which case more games need to be tested for a better "sampling" of this behaviour.
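    That stock-vs-OC methodology boils down to a small decision rule, sketched below. All of it is hypothetical: the 3% margin and every FPS figure are illustrative choices, not numbers from the thread.

    ```python
    def within_margin(stock_fps, oc_fps, margin_pct=3.0):
        """GPU overclock barely moves FPS -> the card is not the limit."""
        return abs(oc_fps - stock_fps) / stock_fps * 100.0 <= margin_pct

    def driver_suspected(nvidia_fps, amd_fps, margin_pct=3.0):
        """Both cards CPU-bound yet far apart -> the driver is the likely cause."""
        gap = abs(nvidia_fps - amd_fps) / min(nvidia_fps, amd_fps) * 100.0
        return gap > margin_pct

    # Hypothetical single-CPU DX12 run with settings chosen to bottleneck the CPU:
    nv_stock, nv_oc = 80.0, 81.0      # GeForce card, stock vs. overclocked
    amd_stock, amd_oc = 100.0, 101.0  # Radeon card, stock vs. overclocked

    cpu_bound_both = within_margin(nv_stock, nv_oc) and within_margin(amd_stock, amd_oc)
    suspicious = cpu_bound_both and driver_suspected(nv_oc, amd_oc)  # ~25% apart
    ```

    If overclocking neither card moves its own score beyond the margin, both runs are CPU-bound; a large cross-vendor gap at that point implicates the driver rather than the silicon.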
