Page 7 of 8 FirstFirst ... 45678 LastLast
Results 151 to 175 of 196

Thread: AMD's smoothness factor put to the test by AMD & HardOCP...

  1. #151
    Xtreme Mentor
    Join Date
    Feb 2008
    Posts
    2,671
    Quote Originally Posted by ice_chill View Post
    You cannot use Windows Task Manager to see how load is being spread; I run applications that are definitely single-threaded, and it shows up as using 25% on each core of a quad instead of 100% on a single core.
    Which indicates to me that the workload of that application is being spread across every core, so multithreading is still working. Even if it doesn't increase performance, it reduces the CPU load on the main core, which is good.

    Most games are like this; they will rarely even utilize 50% of a 2500K. Why is it that in most cases AMD are significantly slower than Intel, even after all this time and enough opportunity for them to at least make something equivalent? AMD haven't had anything good enough for me since S939, and I don't fall for cheap marketing stunts that try to tell me that a game running at 30 FPS on AMD is smoother than a game running at 60 FPS on Intel.

    The first thing I can assume from the info is that Vsync was off, so frame desynchronization was happening.

  2. #152
    Xtreme Member
    Join Date
    Nov 2005
    Location
    Cape Town - South Africa
    Posts
    261
    I'm not in a position to say anything about AMD FX8150 or FX8120 performance which might make my comment null and most likely void. About three months ago I decided to go the Intel route again, after a couple of years of using AMD.

    I do not use my computer for video encoding or decoding or any other productive tasks, except maybe assignments or reports for my studies. What my computer is mostly used for is gaming and overclocking. Going from a Phenom II 940 to a 955 to a 965 to a 1090T to a 1100T, I've noticed very little difference when playing games. All these processors, except for the 940, were running at 4GHz for everyday gaming. Jumping over to Intel, I got the 2600K and I really enjoy this processor, maybe more so because it is running 4.9GHz for everyday use under custom air cooling.

    However, it is really difficult to find the difference between the AMD processors and the 2600K. The only notable difference I found was that after a couple of hours of gaming on the 2600K I get stutters, and the only solution seems to be exiting the game and restarting the computer. I went from 4GB of RAM to 8GB, and now I'm using 16GB. I have not gamed for many hours yet to see if the same thing happens with 16GB, but this never happened while I was gaming on my AMD setup. Now, I might be doing something stupid in the setup which causes this, so I won't blame it solely on the 2600K. Like I said, I really enjoy the processor, except that I'm struggling to get it over 5.2GHz, even under DICE. Performance is great when benching, but for everyday use and gaming I won't call this a win for the 2600K.

  3. #153
    Xtreme Enthusiast
    Join Date
    Dec 2005
    Location
    Northern Virginia
    Posts
    781
    I made a "downgrade" from a Core 2 Quad Q6700 with a 4850 to a Phenom II X3 720 because of heat/noise and the fact that I had less mouse lag on the menu screen for Fallout 3. Huge improvement. Sure, FPS is a little lower, but I swear the average is higher, and there is zero lag. Even on an E8400 at 4.5 GHz I was getting lag. 2+ years later, I'm still rocking the tri-core... and the only thing on my mind is an SSD, because I'd like things to load a little quicker and could use a reinstall of Windows considering the beating this installation has taken over the past few years.
    Computer:
    Case: Corsair 750D Airflow Edition
    Mobo: Gigabyte Aorus X570 Ultra
    RAM: Team TForce Xtreem ARGB 3600C14 2x16gb XMP
    CPU: AMD Ryzen 5900x
    Graphics: EVGA (rip) RTX 3080 FTW3
    PSU: Seasonic Focus GX 850w
    Cooling: Arctic Liquid Freezer II 360mm
    NVMe: SKHynix P41 Platinum, Samsung 980 Pro 2tb
    SSD: Micron 1100 2TB, Samsung 860 Evo 1tb
    HDD: WD SE 2TB, WD Black 1tb 3 platter with over 10 years of power-on time

  4. #154
    Xtreme Cruncher
    Join Date
    Nov 2005
    Location
    Rhode Island
    Posts
    2,740
    Quote Originally Posted by bhavv View Post
    Which indicates to me that the workload of that application is being spread across every core, so multithreading is still working. Even if it doesn't increase performance, it reduces the CPU load on the main core, which is good.
    There is no "main core" in a multi-core CPU. All Windows is doing is shuffling the application rapidly from core to core. In my experience, when it starts doing that with a single-threaded application, performance actually drops and power usage goes up. Forcing the affinity to a single core will usually net a performance gain. Before Source went multi-threaded, forcing it to a single core could be worth as much as 15-30 FPS in CSS.
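    The affinity experiment described above is easy to reproduce. A minimal sketch in Python, using Linux's scheduler-affinity call (on Windows you would use Task Manager's "Set Affinity" or a library like psutil instead; the workload function is purely illustrative):

```python
import os

def pin_to_core(core_id, pid=0):
    # Restrict a process to one core (Linux; pid=0 means "this process").
    # With affinity forced, a single-threaded workload shows ~100% on one
    # core instead of being smeared ~25% across each core of a quad.
    os.sched_setaffinity(pid, {core_id})
    return os.sched_getaffinity(pid)

def busy_work(n=200_000):
    # A purely single-threaded workload; without pinning, the scheduler
    # is free to migrate it between cores while it runs.
    total = 0
    for i in range(n):
        total += i * i
    return total
```

    Pin first, then run the workload and watch per-core usage. Migrating a hot thread between cores throws away its cache-warm state, which is one plausible reason forcing affinity sometimes helped older single-threaded games.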
    Fold for XS!
    You know you want to

  5. #155
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    Quote Originally Posted by Brice MJ View Post
    According to this, gaming performance of the 8150 is 52% lower than the 2500K (on average). Price is 259/179 = 1.44, or 44% higher. So the 8150 needs a price cut of a mere 50-60% to match the gaming price/performance of the 2500K.
    I don't see Phenom X4/X6 users having any less gaming "performance" than i5/i7 users. They (hardware.fr) just used a lower resolution to bring out the CPU difference.
    With the highest-end gfx card of today, the FX is around 16% slower than the 2500K at 19x12 resolution according to techpowerup. This is the most common resolution that most gamers use. And as many users of this forum who switched from AMD to Intel already told you, there is little to no difference in games with modern CPUs like the X6/FX and i5/i7.
    Last edited by informal; 01-27-2012 at 01:32 PM.

  6. #156
    Xtreme Mentor
    Join Date
    Jun 2008
    Location
    France - Bx
    Posts
    2,601
    Quote Originally Posted by informal View Post
    I don't see Phenom X4/X6 users having any less gaming "performance" than i5/i7 users. They (hardware.fr) just used a lower resolution to bring out the CPU difference.
    With the highest-end gfx card of today, the FX is around 16% slower than the 2500K at 19x12 resolution according to techpowerup. This is the most common resolution that most gamers use. And as many users of this forum who switched from AMD to Intel already told you, there is little to no difference in games with modern CPUs like the X6/FX and i5/i7.
    Before you post, read the article you quote ...

    Indeed, we leave aside our beloved 800*600 resolution to provide you with scores at 1920*1080, while still choosing fairly heavy scenes that allow us to be limited by processor performance.
    http://www.hardware.fr/articles/842-...cole-test.html

    Hardware.fr chose multithreaded applications only, in order to present the FX in its best light. Regarding games, they selected game scenes (1920x1080) which are more CPU-dependent than GPU-dependent. The FX sits between a Q9650 and a QX9770, far from the i3-2100 ...

  7. #157
    Xtreme Mentor
    Join Date
    Feb 2009
    Location
    Bangkok,Thailand (DamHot)
    Posts
    2,693
    My 8120 @ 4.0 runs Skyrim smoother than my 2500K @ 4.5.
    Both are my PCs.
    Intel Core i5 6600K + ASRock Z170 OC Formula + Galax HOF 4000 (8GBx2) + Antec 1200W OC Version
    EK SupremeHF + BlackIce GTX360 + Swiftech 655 + XSPC ResTop
    Macbook Pro 15" Late 2011 (i7 2760QM + HD 6770M)
    Samsung Galaxy Note 10.1 (2014) , Huawei Nexus 6P
    [history system]80286 80386 80486 Cyrix K5 Pentium133 Pentium II Duron1G Athlon1G E2180 E3300 E5300 E7200 E8200 E8400 E8500 E8600 Q9550 QX6800 X3-720BE i7-920 i3-530 i5-750 Semp140@x2 955BE X4-B55 Q6600 i5-2500K i7-2600K X4-B60 X6-1055T FX-8120 i7-4790K

  8. #158
    Xtreme Enthusiast
    Join Date
    Apr 2010
    Posts
    514
    My Pentium 4 runs Skyrim smoother than a 2600K @ 5.0.

  9. #159
    Xtreme Mentor
    Join Date
    Mar 2006
    Location
    Evje, Norway
    Posts
    3,419
    Quote Originally Posted by bhavv View Post
    Which indicates to me that the workload of that application is being spread across every core, so multithreading is still working. Even if it doesn't increase performance, it reduces the CPU load on the main core, which is good.
    By that definition, SuperPi is also multithreaded and uses every core available....

    You cannot just open Task Manager and look at the CPU usage to properly judge an application.
    Quote Originally Posted by iddqd View Post
    Not to be outdone by rival ATi, nVidia's going to offer its own drivers on EA Download Manager.
    X2 555 @ B55 @ 4050 1.4v, NB @ 2700 1.35v Fuzion V1
    Gigabyte 890gpa-ud3h v2.1
    HD6950 2GB swiftech MCW60 @ 1000mhz, 1.168v 1515mhz memory
    Corsair Vengeance 2x4GB 1866 cas 9 @ 1800 8.9.8.27.41 1T 110ns 1.605v
    C300 64GB, 2X Seagate barracuda green LP 2TB, Essence STX, Zalman ZM750-HP
    DDC 3.2/petras, PA120.3 ek-res400, Stackers STC-01,
    Dell U2412m, G110, G9x, Razer Scarab

  10. #160
    Xtreme Addict
    Join Date
    Mar 2009
    Posts
    1,116
    Quote Originally Posted by PatRaceTin View Post
    My 8120 @ 4.0 runs Skyrim smoother than my 2500K @ 4.5.
    Both are my PCs.
    People say stuff like this in this thread, but both CPUs are so fast I'm super skeptical you can tell a difference. Games are held back by the graphics card, and the CPU difference is always small...

  11. #161
    Xtreme Member
    Join Date
    Aug 2011
    Posts
    180
    Regarding stutter after several hours of gaming: I have found that if the video card reaches high heat, it will produce stutter, not the CPU. Several hours of gaming could produce enough heat inside the case to cause that.

    Also, Intel has SpeedStep, which adjusts the CPU ratio according to load; AMD does not have this feature and shoots the CPU speed straight from idle to full speed. Intel's SpeedStep can actually impact performance, as it tries to guess what frequency is required instead of giving the maximum frequency it can. Simply disable this feature in the BIOS.

  12. #162
    Xtreme Member
    Join Date
    Aug 2011
    Posts
    180
    Also regarding original numbers:
    System A (Intel Core i7-2700K) better: 40 votes
    System B (AMD FX-8150) better: 73 votes
    No difference: 28 votes


    40 is quite a big number of people who thought the Intel setup was running better.

  13. #163
    Xtreme Cruncher
    Join Date
    Jun 2006
    Posts
    6,215
    Quote Originally Posted by ice_chill View Post
    Also regarding original numbers:
    System A (Intel Core i7-2700K) better: 40 votes
    System B (AMD FX-8150) better: 73 votes
    No difference: 28 votes


    40 is quite a big number of people who thought the Intel setup was running better.
    Yes, but 73 is a much higher number, 82.5% higher than 40 to be exact. That's almost double the number of users who thought the Intel setup was smoother, well beyond the margin of error.
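    A back-of-envelope significance check supports this, if (and it is a big if) we treat each decided vote as an independent coin flip under a "no real difference" null. A show-of-hands demo run by AMD is not a controlled experiment, so take this as arithmetic, not proof:

```python
from math import comb

def two_sided_binomial_p(k, n):
    # Exact probability of a split at least as lopsided as k out of n
    # if voters were really just guessing 50/50 (two-sided).
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Decided voters only: 73 picked the FX rig, 40 picked the Intel rig.
k, n = 73, 73 + 40
p_value = two_sided_binomial_p(k, n)  # well under 0.01
share = k / n                         # roughly 64.6% of decided voters
```

    In other words, a 73-40 split among 113 decided voters is very unlikely to be pure chance, though it says nothing about why the FX rig felt smoother to them.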

  14. #164
    Xtreme Member
    Join Date
    Aug 2011
    Posts
    180
    Yes, but the total number of people is 141, so about half thought AMD was better; the fact that the other half didn't think so means the difference isn't blatantly noticeable.

    The question is why the other half didn't notice the smoothness.

  15. #165
    Registered User
    Join Date
    Jan 2007
    Posts
    24
    Quote Originally Posted by bamtan2 View Post
    people say stuff like in this thread. but both cpus are so fast I'm super skeptical you can tell a difference. games are held back by the graphics card, and the cpu difference is always small...
    Isn't this EXACTLY the point AMD are trying to demonstrate to people?

    Yeah, XYZ benchmark shows a +1,000,000 better CPU score, but the CPU can't do anything with it. Looks to me like they are 'trying' to bring some real-world perspective to the synthetic benchmarks... and I accept it's to get CPU sales through that.

    It's about time someone tried to check whether pure benchmarking REALLY matters in real-life scenarios. Don't we all live in the real world? Obviously not.

    What is disappointing is that the sample size wasn't big enough to make people accept the results (or even conclusively disprove the assertion) and focus the debate on WHY this was the case, instead of the debate being centered around 'I don't believe it, so it must have been a fix for reasons a, b, or c'.

    I'm really interested in why brute force (cos that's what Intel is doing) is better than trying to be 'clever' (cos that's what AMD is doing, otherwise they'd have copied Intel). I've got no idea which one is best but would like to REALLY know why. I want to make INFORMED decisions.

    If the surrounding hardware holds back the main components of a PC, then other than for the sport of benchmarking, why does it matter if I have the absolute top-end CPU for something like gaming?

    I read the Techpowerup comparison of FX/Nehalem/SB on the 7970 with interest, because although the percentage differences at the end seemed rather large, when I went through the games looking at the FPS differences at the resolution I game at, the differences weren't that big, and apart from a few games (out of 15) they were negligible. Even the synthetic benchmarks weren't much different, yet at the end we have 10 and 16 percent differences at the two highest resolutions in an overall 'summary'. I was really shocked at that, so I re-read it a few times to work out why my initial impression (during reading the article) was shattered by the final figures. These percentages were obviously skewed up by a handful of games that SB liked and FX didn't. Lies, damn lies and statistics.
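    The skew described here is easy to demonstrate with made-up numbers (these ratios are purely illustrative, not Techpowerup's data): a handful of outlier titles drag the arithmetic mean well away from what the typical game shows.

```python
from statistics import mean, median

# Hypothetical per-game performance ratios (faster CPU / slower CPU).
# Twelve titles sit near parity; three outliers strongly favor one side.
ratios = [1.01, 0.99, 1.02, 1.00, 1.03, 0.98, 1.01, 1.02,
          1.00, 1.01, 0.99, 1.02, 1.35, 1.42, 1.28]

avg = mean(ratios)    # ~1.075: the summary says "7.5% faster overall"
med = median(ratios)  # 1.01: the typical game barely notices the CPU
```

    The summary average and the typical game tell different stories, which is exactly the "initial impression vs final figures" mismatch described above.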

    The numbers were correct, but the overall impression they left was wrong. I think, in a limited way, AMD have just proved what Techpowerup found with numbers. Maybe this type of experiment needs to be done more, and in a more expansive way, to drive PCs and gaming improvements down a better path.

    The direction of these debates on here disappoints me, because I see too many 'reason to fit hypothesis' statements rather than 'hypothesis drawn from reason', even though we have pages and pages in threads. More real debate please.

  16. #166
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,366
    Quote Originally Posted by Augustus View Post
    What is disappointing is that the sample size wasn't big enough to make people accept the results (or even conclusively disprove the assertion) and focus the debate on WHY this was the case, instead of the debate being centered around 'I don't believe it, so it must have been a fix for reasons a, b, or c'.
    It's not that the sample size wasn't "big enough". It's about "by whom and how" the test was done. The environment was fully controlled by AMD, and who knows what they did there. If such a "test" had been done by Intel with the exact opposite result, would you believe it?
    Last edited by kl0012; 01-28-2012 at 11:18 AM.

  17. #167
    Xtreme Enthusiast
    Join Date
    Dec 2010
    Posts
    594
    Is one 7970 even enough for 3x1080p screens? According to my research, that would get you around 30-50 FPS depending on detail settings. That is not smooth for a multiplayer first-person shooter. The CPU didn't even matter there because the 7970 held them both back. For that reason alone, the test is bogus.

  18. #168
    Xtreme Addict
    Join Date
    Dec 2002
    Posts
    1,250
    Just proves there is no practical difference for the average user.
    Shows people don't buy with their wallet but with their beliefs.
    4670k 4.6ghz 1.22v watercooled CPU/GPU - Asus Z87-A - 290 1155mhz/1250mhz - Kingston Hyper Blu 8gb -crucial 128gb ssd - EyeFunity 5040x1050 120hz - CM atcs840 - Corsair 750w -sennheiser hd600 headphones - Asus essence stx - G400 and steelseries 6v2 -windows 8 Pro 64bit Best OS used - - 9500p 3dmark11 (one of the 26% that isnt confused on xtreme forums)

  19. #169
    Registered User
    Join Date
    Jan 2007
    Posts
    24
    Quote Originally Posted by kl0012 View Post
    It's not that the sample size wasn't "big enough". It's about "by whom and how" the test was done. The environment was fully controlled by AMD, and who knows what they did there. If such a "test" had been done by Intel with the exact opposite result, would you believe it?
    Of course it wasn't big enough.....otherwise people wouldn't be arguing about how it was too low a sample number to be representative. Have you read and understood what people have been writing all over the 'net on this? The sample size was one of the first 'excuses' for dismissing this type of method out of hand. Then you came along and stated one of the other unfounded 'excuses'.....well done.

    By that rationale, let's dismiss every test and benchmark ever run, because we don't really know if anything shady was done by anyone, since they fully controlled the environment in their testing. Sheesh.

    Again, we're back to arguing about the semantics of the test (fully controlled by AMD, what they could have done.... blah blah blah) rather than whether this sort of testing is ACTUALLY valid. Why aren't you arguing for MORE people to do this sort of thing OPENLY and INDEPENDENTLY?? That way we'll KNOW if this is valid or not. That will tell you whether something shady was done by AMD here....... this is something you actually DO NOT KNOW, yet you are so sure of your argument.

    We're not really moving the situation on by dismissing things out of hand without some REAL expansive adoption of this sort of testing to generate some knowledge instead of childish, yah-boo arguing.

    With the lack of curiosity or rational argument (again), we get a conspiracy theory with no real grounding, just thrown out there. Please go and look up the word 'debate'.

  20. #170
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,366
    Quote Originally Posted by Augustus View Post
    Of course it wasn't big enough.....otherwise people wouldn't be arguing about how it was too low a sample number to be representative. Have you read and understood what people have been writing all over the 'net on this? The sample size was one of the first 'excuses' for dismissing this type of method out of hand. Then you came along and stated one of the other unfounded 'excuses'.....well done.

    By that rationale, let's dismiss every test and benchmark ever run, because we don't really know if anything shady was done by anyone, since they fully controlled the environment in their testing. Sheesh.

    Again, we're back to arguing about the semantics of the test (fully controlled by AMD, what they could have done.... blah blah blah) rather than whether this sort of testing is ACTUALLY valid. Why aren't you arguing for MORE people to do this sort of thing OPENLY and INDEPENDENTLY?? That way we'll KNOW if this is valid or not. That will tell you whether something shady was done by AMD here....... this is something you actually DO NOT KNOW, yet you are so sure of your argument.

    We're not really moving the situation on by dismissing things out of hand without some REAL expansive adoption of this sort of testing to generate some knowledge instead of childish, yah-boo arguing.

    With the lack of curiosity or rational argument (again), we get a conspiracy theory with no real grounding, just thrown out there. Please go and look up the word 'debate'.

    To argue about any benchmark, first the benchmark must be well defined, must have a clear description of what it measures and how, and must be done by a neutral party. Otherwise you may find a lot of "benchmarks" where an iPad is faster than an i7-3960X, but what is the point?

  21. #171
    c[_]
    Join Date
    Nov 2002
    Location
    Alberta, Canada
    Posts
    18,728
    Quote Originally Posted by kl0012 View Post
    To argue about any benchmark, first the benchmark must be well defined, must have a clear description of what it measures and how, and must be done by a neutral party. Otherwise you may find a lot of "benchmarks" where an iPad is faster than an i7-3960X, but what is the point?
    The testing party does not have to be neutral; it only must have a truthful setup of the equipment (no software tweaks, etc.) and provide full disclosure of the testing data and methodology.

    You don't have to like the results, but you should learn from them and perform your own tests if you think something shady was done. That's what peer review is for.

    All along the watchtower the watchmen watch the eternal return.

  22. #172
    Xtreme Member
    Join Date
    Aug 2011
    Posts
    180
    Just recently: Dell shows a misleading image quality picture to flog graphics cards

    And now AMD is doing a similar thing.

  23. #173
    Xtreme Addict
    Join Date
    Mar 2007
    Location
    United Kingdom
    Posts
    1,597
    This slower-but-smoother debate always makes me giggle. We all know Intel make the better CPUs at the moment, so get over it and instead focus on promoting the actually very good and successful RADEON 79xx series of graphics cards.

    here endeth my rant

    John
    Stop looking at the walls, look out the window

  24. #174
    Xtreme Addict
    Join Date
    Jan 2005
    Posts
    1,366
    Quote Originally Posted by STEvil View Post
    The testing party does not have to be neutral; it only must have a truthful setup of the equipment (no software tweaks, etc.) and provide full disclosure of the testing data and methodology.

    You don't have to like the results, but you should learn from them and perform your own tests if you think something shady was done. That's what peer review is for.
    You can learn nothing from this benchmark. First of all, this "benchmark" is not reproducible, because it is built on "human opinion" and can therefore be easily rigged without any danger of being caught. Second, "human opinion" is not a good metric, because it can be easily manipulated; there are a lot of techniques for doing so.

  25. #175
    Xtreme Mentor
    Join Date
    May 2008
    Posts
    2,554
    Quote Originally Posted by bhavv View Post
    Well this is with Skyrim running on my PC:


    It's definitely using every core, if only a little bit. I wouldn't call it a single-threaded application.
    In the benchmarks that I've done, Skyrim doesn't scale past two cores. Don't listen to Task Manager.

    Save the game in a CPU-limited area, then walk around and record it with the Fraps benchmarking tool; reload that save and repeat, then repeat again. Then go into the BIOS, disable two cores, and repeat the benchmarking. Tell me if you see any difference; I didn't.

    Now, if you play with clock speed, the difference is massive. You can't tell me that a Bulldozer CPU would feel "smoother" in Skyrim, or Fallout New Vegas, or a modded Fallout 3, or a modded Oblivion, or quite a few other games like ARMA 2 for that matter.
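    The walk-around test above can be made quantitative. A sketch of the analysis half, assuming you have exported a Fraps-style per-frame time log (the numbers below are invented, purely to show why average FPS alone misses stutter):

```python
from statistics import mean, pstdev

def smoothness_report(frame_times_ms):
    # Summarize a per-frame time log. Frame-time spread and worst-case
    # hitches track perceived stutter far better than average FPS does.
    avg_ft = mean(frame_times_ms)
    return {
        "avg_fps": 1000.0 / avg_ft,
        "jitter_ms": pstdev(frame_times_ms),  # spread of frame times
        "worst_ms": max(frame_times_ms),      # longest single hitch
    }

# Two invented one-second logs with similar average FPS:
steady  = [16.7] * 58 + [16.0, 17.4]                     # even pacing
stutter = [14.0] * 55 + [50.0, 55.0, 60.0, 20.0, 25.0]   # hitchy
```

    Run smoothness_report on both: the averages come out close, but the hitchy log's jitter and worst frame are far higher, which is exactly the "smoothness" an FPS counter hides.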

