More HERE
Quote:
Originally Posted by AnandTech
I sold one of my 7970s and went back to 680 SLI in my main box due to this issue, and others (e.g. groups of rocks flashing on and off in Crysis 3).
NVIDIA's drivers are just better, especially at multi GPU.
On the other hand, I stand by my earlier statements that the 7970s are the best deal I've ever gotten for a high end card. The XFX Black Edition I'm using in my second rig was $360 AMIR and came with two games I would have paid $50/ea for: Bioshock Infinite and Crysis 3. A card with comparable performance to the GTX680 for $260 is pretty freaking amazing in this era of $1000 high end cards.
Good article, and it's nice to see AMD admitting the problem and working hard to fix it. The information about FRAPS and its limitations is very interesting; hopefully we'll see new and better tools come out soon for measuring this kind of performance.
They could start with this:
HT ON (Core i7 with HT on)
http://img850.imageshack.us/img850/4...99c9d6d3e1.png
HT OFF (Core i7 with HT off)
http://img837.imageshack.us/img837/8...944a389c73.png
I won't even tell you what impact core parking has on some games, like BF3 under Windows 7 / 8.
A second solution, for overclockers, is something we generally do anyway: remove all clock variation, C-states etc., so it runs at 4.5-5GHz full speed all the time on all cores.
I may be getting ahead of myself, but it looks like some frame time issues are simply linked to core speed variation when HT is on, core parking, and the way Windows handles threads in this case (where you rarely go over 25% usage on a core). HT seems to make this worse. (This doesn't mean there's no problem otherwise, but it amplifies the problem.)
But reading this article, it looks like it really shows what I'm saying: when using FRAPS you will not notice more or less "stutter" on screen, but FRAPS records the input before it enters the GPU, Direct3D and the render pipeline, so it sees these hiccups from HT and the Windows thread scheduler.
http://img203.imageshack.us/img203/3...fa5ac5430c.png
http://img819.imageshack.us/img819/5...19c890c00b.png
http://img33.imageshack.us/img33/598...d158d833da.png
http://img844.imageshack.us/img844/1...00981ec809.png
http://img820.imageshack.us/img820/7...6b28600ce3.png
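Since much of this thread argues from FRAPS frametimes logs, here is a minimal sketch (Python, with invented sample timestamps) of how such a log - one cumulative timestamp per rendered frame - turns into the average and percentile figures people are quoting:

```python
# Sketch: FRAPS "frametimes" logs record one cumulative timestamp (ms)
# per rendered frame, so per-frame times are the deltas between entries.
# The sample timestamps below are invented for illustration.

def frame_stats(timestamps_ms):
    """Return (average frame time, nearest-rank 99th percentile) in ms."""
    deltas = sorted(b - a for a, b in zip(timestamps_ms, timestamps_ms[1:]))
    avg = sum(deltas) / len(deltas)
    p99 = deltas[int(0.99 * (len(deltas) - 1))]
    return avg, p99

# Mostly ~7 ms frames with one ~24 ms spike hiding in the middle:
avg, p99 = frame_stats([0.0, 7.1, 14.0, 21.4, 28.3, 52.0, 59.1])
print(round(avg, 2), round(p99, 2))
```

A large gap between the average and the high percentiles is exactly the stutter signature these HT on/off graphs are showing, and it is invisible in a plain FPS average.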
Why was Fraps a popular tool for measuring FPS to begin with? Would not a more optimal tool for frame grabbing like Dxtory be a more suitable choice?
http://dxtory.com/v2-home-en.html
It's incredibly cool that AMD is so honest about this issue.
I have to agree with Anandtech about GPUView, though. I have used GPUPerfview (the AMD tool), the Nvidia one, and GPUView in the past, but I'm just a bit lost when it comes to reading the data correctly.
excellent article.
If Windows is part of the problem, push for OpenGL development and Linux support; part of fixing the problem would be removing Windows' limitations.
True, but Windows makes you work harder: the pre-rendered frames cap, plus
Quote:
Complicating all of this is the fact that Windows is not a real-time operating system, meaning that Windows cannot guarantee that it will execute any given command within a certain period of time. Essentially, Windows will get around to it when it can. In order to achieve the kind of millisecond level response time that applications and drivers need to ensure smoothness, Windows has to be over provisioned to make sure it has excess resources. Consequently this is part of the reason for why the context queue exists in the first place, to serve as a buffer for when Windows can't get the next frame passed down quickly enough.
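A toy model of the context queue described in that quote (assuming nothing about the real driver internals): jittery CPU-side submissions get absorbed by the queue as long as the GPU is the steady bottleneck.

```python
# Toy model of the context queue, not the real driver logic: the game
# submits frames at jittery times because Windows isn't real-time,
# frames wait in the queue until the GPU is free, and the GPU drains
# them at its own steady pace.

def gpu_finish_times(submit_ms, gpu_cost_ms):
    """Times (ms) at which each queued frame leaves the GPU."""
    finish, gpu_free = [], 0.0
    for t in submit_ms:
        start = max(t, gpu_free)   # wait in the context queue if GPU is busy
        gpu_free = start + gpu_cost_ms
        finish.append(gpu_free)
    return finish

# Submissions jitter around every ~10 ms; GPU needs a steady 12 ms/frame.
print(gpu_finish_times([0, 7, 25, 31, 38, 55], 12.0))
```

Despite the uneven submissions, the frames come out spaced almost exactly 12 ms apart - which is the buffering role the article attributes to the context queue.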
Agree. Still, we would need a complete picture on Linux-based systems; different problems will of course enter the equation. It could be interesting.
Personally, when I see all the shots in the dark, the inconsistent results, the differences from one game to another and from one CPU/system to another, some results look a little overstated by some people.
Hopefully they get it sorted by the time the 8 series comes out. I just seem to have a nose for skipping the problems and jumping in at the right time: on a big tech change, avoid the first round.
Common Linux distros like Ubuntu, Debian etc. are not real-time either.
AMD supports Red Hat, which seems to have real-time computing.
Sorry, but real-time OSes are not feasible for consumers. So blaming Windows just for the sake of it is kind of a joke.
Desktop computing is not mission-critical and never will be. The average user isn't interested in one task being done under strict latency limits to make it run as low-latency as possible; rather, many tasks should be done in a fashion that works without noticeable perceived latency.
Got to disagree there. How many years has it been going on and now they come clean when they have the problem largely fixed for single cards and a fix 4 months out for CFx?
I read an interview with Tom Peterson at NVIDIA the other day where he said they've been optimizing for smooth frame delivery for a couple years now. (likely why Scott Wasson at Tech Report looked into it in the first place)
So 2 years after their competitor did this, and a year and a half after the tech review sites started reporting on it, "they are "honest" and fixed it".
I think it would be "cooler" if they had a) thought of it on their own like NVIDIA did, b) at least started working on it a year and a half ago, and c) not released the "Never Settle" drivers, which were widely reported to increase the problem.
I'm very used to SLi in my boxes and largely "it just works". CFx with 7970s was not that kind of experience for me so now I'm down to one 7970.
I'd say this to AMD:
I like your single card performance and am glad this is fixed.
I very much dislike NVIDIA charging $1000 for cards that depreciate very rapidly.
I'd be happy to go all AMD if they can get the 8000 series out the door for $700 or less. I'd buy at least two, if not three.
Those things were mentioned in the article because there is a negative impact on how software acts in that environment, and it was stated that more resources are needed because of it. So how is that blaming just for the sake of it? Also, there are plenty of capable people who want a gaming alternative to Windows/OSX. A dedicated gaming OS like the 360/PS3 have would devastate consoles and may make them obsolete. :rofl:
Oh, so it was nvidia claiming to be "smoother" this time around? :p:
(I wonder how many people will get the admittedly bad joke :D)
Anyway, I can't say I've really noticed much jitter or whatever with either my 4870 or my 7950 :shrug:
Lanek >
Most gamers disable C-states when they overclock. It would be nice to see the effect of CnQ, enabled or disabled, on stuttering & micro-stuttering.
And there is a patch for turning Windows into a real-time OS, but I never tried it. The problem with that: if your app crashes, the whole system crashes too :D
And you would lose some performance, and some comfort while using the Windows desktop; app switching would be less pleasant.
But I agree. Very old OSes were real-time, or had a real-time mode for gaming, and it was smoother. Latency has increased a lot with the new rendering techs. I would rather play an old Carmageddon than BF3 ... more fun, faster ...
http://en.wikipedia.org/wiki/Carmageddon
old time ...
Companies of this magnitude are generally fairly closed in terms of releasing information about issues or even openly discussing them. You can of course argue that a fix is long overdue or that bad choices were made; this however does not change the fact that people have not been complaining about this in single-card setups, probably because its impact there is negligible.
As far as dual-card setups are concerned I do not qualify to comment as I have very little experience here and the only negative factor I remember is the neglect of working profiles and/or support in games.
Frametimes testing - Air Superiority - B. Flats - 3770K @ 4GHz - GTX 680 @ 1181MHz - 1080p 2xMSAA 16xAF 2xTrMSAA - HT ON vs HT OFF
http://www.abload.de/img/bf3_air_superiority_n0yrvf.png
My testing obviously
I know this differs from system to system.. it's why using FRAPS is so hit-or-miss ..
Hitman Absolution: gameplay maps (not the benchmark)
Chinatown, 1920x1080 .. the highest possible settings in game, but I don't use a single card; in my case I use 2x 7970 @ 1050MHz with AFR.
This is the type of result you see on sites with a single HD 7970 GHz (TechReport, HH etc.)..
http://img594.imageshack.us/img594/3...e8c976cb82.png
Now this is my result in the same game and same place (this comes from an old test I did a month ago, maybe more).
Catalyst 13.2 beta - 1920x1080, all maxed, using the exact same method (I scrupulously followed the HH frame times test in the game) but with one large difference, which should make things much worse: I have Crossfire enabled. No tricks with HT, no core parking disabled (the CPU is just OC'd to a standard 4.5-5GHz, if I remember well). These tests were conducted in January, before the 13.3 beta Catalyst with the Hitman frame fix.
http://img248.imageshack.us/img248/1148/132hma2.png
This is with CFX and AFR (which introduces the alternate-frame render timing). The results on MY system are better than what is reported on sites with a single card? (Note: there's no monitor yet that can show this variation, or only in extreme cases; monitors have an input latency higher than the variation in frame render time with their hardware: 8-9ms in the best case, 16ms on an average display, while the 144Hz Asus 27" TN is around 6ms in the best case.)
The average frame time is 7.1ms .. the 99.7th percentile is at 12ms. I defy anybody to imagine being able, using a monitor today, to see stuttering from this graph.. it is just impossible (in this limited case).
I can tell the difference, and id Tech lets you adjust frame time.
http://img5.imageshack.us/img5/620/d3frametime.jpg
http://www.anandtech.com/show/6862/f...marking-part-1
Looks like nVidia is taking the lead on new benchmarking tools, probably because right now they have the lead when it comes to stuttering :cool:
:confused:
As far as single GPUs go, AMD's 7xxx series seems to be far more consistent with frame latency than past cards. Older TechReport reviews seem to make that case anyway.
Microstutter has always been an issue. I'm super glad that AMD is finally attempting to do something about it.
I'm not sure I follow. From what I understand, it directly records the output from the video card in order to look at the amount of time between frames. It seems to me that the point is to ignore everything upstream and focus only on the frame times the end user actually sees. I can't speak to the analysis tools, but for characterizing the stutter a user experiences, what would be a better approach?
Cliffs:
We realize there is a problem, it will eventually go away like our mouse bug.
DX9 has been around for 10 years but we still can't produce drivers that will get our hardware to work properly
Many DX10 titles are on the way! Stay tuned for more crappy driver releases.
Give us your money cause we reassured you stupid consumers.
Start @ 3 min
http://www.youtube.com/watch?v=hapCu...endscreen&NR=1
:confused:
What does the carmack interview have to do with the topic?
NVIDIA parts/drivers have worked out ways around the problems Carmack mentions with drivers and DX and AMD is starting to do the same.
Read the thread. He talked about the latency issue, and about over-spec hardware struggling compared to consoles. It is really noticeable in console-based games like Sega Rally Revo; that thing skips w/o Crossfire.
Quote:
Desktop computing is not mission critical and never will be aka the average users isn't interested in the possibility that one task is done under certain latency limitations, to make the one task run as low latency as possible, but many task are done in a fashion that work without noticeable perceived latency.
From you. The video is a couple of years old; they knew, but haven't fixed it yet and are just now making a push.
Quote:
Got to disagree there. How many years has it been going on and now they come clean when they have the problem largely fixed for single cards and a fix 4 months out for CFx
The main thing he said is that drivers are his concern; I think it relates to the thread. :shrug:
Exactly, it ignores the entirety of the rest of the issue. You can fix your drivers all you want, but if the problem lies outside the render calls (HT, core parking) there's not much you can do, and relying only on this method may produce flawed results.
An example of a possibly flawed result is the runt frames. Given the numbers that have been shown, they don't all seem to line up as being what we've been told they are (all of the time).
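For reference, the runt-frame classification the capture-card sites use boils down to counting how many scanlines of the final output each rendered frame actually occupied, and flagging slivers. A sketch, where the 21-scanline cutoff and the counts are illustrative assumptions, not anyone's official threshold:

```python
# Sketch of runt-frame classification as done on captured output:
# each rendered frame owns some number of scanlines of the final image;
# a "runt" is a frame that reached the screen only as a thin sliver.
# RUNT_THRESHOLD and the sample counts are invented for illustration.

RUNT_THRESHOLD = 21  # scanlines; reviewers pick their own cutoff

def classify_frames(scanlines_per_frame):
    return ["runt" if n < RUNT_THRESHOLD else "full" for n in scanlines_per_frame]

# A 1080p capture where two frames showed up as 12- and 8-line slivers:
print(classify_frames([540, 538, 12, 530, 8, 532]))
```

A runt still counts as a "frame" in an FPS average while contributing almost nothing visible, which is one reason the raw numbers don't always line up with what we're told they represent.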
Issues like that would probably affect both SLI and Crossfire in the same way. Yet Crossfire has some serious issues that SLI doesn't have, so it's the Crossfire implementation that is to blame, considering that SLI scales quite well.
There are always, everywhere, people complaining about certain graphics glitches, slowdowns, stuttering, whatever. What you see is what matters; end users usually don't really care about the inner workings as long as it works properly. This technique of focusing on the graphic output is what will help the most to properly see and identify all those issues, as with it you don't depend on a person's perception to detect them (not everyone notices), and you get a more scientific, exact method. Once you know that there is an issue somewhere, then you can use even more tools to properly identify what happens at the moment the glitch/stutter occurs, to know the actual cause (hardware, OS, drivers, whatever), and try to fix it.
I'm very optimistic that with this method it will be possible to do before-and-after comparisons, so you can see if a specific option (like VSync), a new driver version, a software patch, or anything else has a positive or negative impact. So it will be trackable whether there are actual improvements.
They can, as seen in some HT vs non-HT results; however, "can" is not "always".
Why is this method the most correct? Why can't another method that does not use visual perception work as well or better, given that it may actually include this method as well as others? Why is one method that performs only one test better than multiple methods that test multiple issues?
Quote:
There are always, everywhere, people complaining about certain graphics glitches, slowdowns, stuttering, whatever. What you see is what matters; end users usually don't really care about the inner workings as long as it works properly. This technique of focusing on the graphic output is what will help the most to properly see and identify all those issues, as with it you don't depend on a person's perception to detect them (not everyone notices), and you get a more scientific, exact method.
You've basically jumped off an assumption cliff with your line of reasoning..
So you can't use other methods in your previous statement, but then you suddenly can in the next? Hmm.. seems you've cliff jumped before if you're going to bring a parachute ;)
Quote:
Once you know that there is an issue somewhere, then you can use even more tools to properly identify what happens at the moment the glitch/stutter occurs, to know the actual cause (hardware, OS, drivers, whatever), and try to fix it.
I'm very optimistic that with this method it will be possible to do before-and-after comparisons, so you can see if a specific option (like VSync), a new driver version, a software patch, or anything else has a positive or negative impact. So it will be trackable whether there are actual improvements.
Good article, very interesting.
Thumbs up to AMD for admitting this issue (finally).
I don't know if "visual perception" is actually the correct name for it, but I'm referring to the same thing responsible for not everyone noticing motion the same way, or being sensitive to the same degree. Some people are more sensitive and notice stuttering or tearing where you wouldn't notice anything, in the same way that during the CRT era some people couldn't notice the difference between playing a game @ 75 Hz and @ 100 Hz, assuming you had enough hardware to draw all the frames flawlessly - the frames are there, but not everyone notices them. It's basically what AMD said in the first Anandtech article:
As you're recording the output of the GPU - that is, what would be displayed on the monitor - you can analyse frame by frame to check for inconsistency. You no longer rely on someone's visual perception/sensitivity to motion, as you have proper tools to check for it. Maybe a guy doesn't notice the tearing due to a runt frame like this one (from here), but that frame WAS on screen for a few milliseconds.
Quote:
In our discussion with AMD, AMD brought up a very simple but very important point: while we can objectively measure instances of stuttering with the right tools, we cannot objectively measure the impact of stuttering on the user. We can make suggestions for what’s acceptable and set common-sense guidelines for how much of a variance is too much – similar to how 60fps is the commonly accepted threshold for smooth gameplay – but nothing short of a double-blind trial will tell us whether any given instance of stuttering is noticeable to any given individual.
If you have any doubts, you can as well re-read all the Tech Report, PCPer and Anandtech articles about this matter and why they're using this new method.
You have both AMD's and nVidia's reasons why the other current method to detect frame delay, FRAPS, isn't accurate enough, and why reviewers have gone through this and this to set up this new method of analyzing the direct output.
If you don't like it, I'm sure there are many people besides myself that would die to know what method you have in mind that is better or more correct.
Should I give you my parachute? ;)
I very much agree with this statement. I'm sure there can be, and probably will be, other tools in the future that give even deeper insight into the rendering pipeline and the effect each part of the system has. However, for the near term I think this is a very large step forward from solely using FRAPS as a benchmarking tool, and it helps explain a lot of what people were complaining about but unable to show with FRAPS results.
My bad, I missed your point. It's a good explanation of why stuttering exists in PC gaming and why companies need to work around it. I thought you were trying to illustrate that latency and stutter "have" to exist in PC gaming, when NVIDIA has apparently figured out a way around it and AMD is well into the process.
I see this issue as a good thing for pc gaming in general. PCs have always had the advantage for resolution and image enhancement, if all graphics solutions deliver smooth animation it can only be good for the industry.
If anyone is interested: over at Guru3D, Hilbert has been working on a splitter and frame capture device/system (capturing right before the signal goes to the screen). He bought some serious hardware for it but is running into the issue of finding a storage system that can sustain the high write speeds without dropping frames.