I stumbled across this option in RadeonPro...
Supposedly Nvidia's and AMD's versions of this setting do roughly the same thing.
Found this thread about the issue (worth reading, IMO).
I read a bunch of threads. Some say that setting it to 2 removes stuttering in BFBC2; some say that unless you set it to 0, Oblivion is bound to lag... The default is 3, and usually increasing the number will decrease your framerate and increase response time. The flip side is that everything will look smoother. Setting it to 0 will make things respond a little faster, but makes things choppy, from my understanding. It is basically the number of frames the GPU renders in advance before they are displayed on the screen.
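For what it's worth, Direct3D exposes the same knob per application: DXGI's IDXGIDevice1::SetMaximumFrameLatency caps how many frames may be queued ahead of the GPU, and its default is also 3. A minimal sketch of using it (the helper name CapFrameLatency is mine, and error handling is trimmed):

    // Cap the DXGI frame-latency queue from inside an app.
    // Assumes an already-created D3D11 device; error handling trimmed.
    #include <d3d11.h>
    #include <dxgi.h>

    void CapFrameLatency(ID3D11Device* device, UINT maxFrames /* e.g. 1 */)
    {
        IDXGIDevice1* dxgiDevice = nullptr;
        if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                             reinterpret_cast<void**>(&dxgiDevice))))
        {
            // 1 = at most one frame may be queued ahead of the GPU
            // (less input lag, more chance of stalls); the default is 3.
            dxgiDevice->SetMaximumFrameLatency(maxFrames);
            dxgiDevice->Release();
        }
    }

The driver setting we're discussing presumably just overrides this queue limit from outside the game.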
Is this statement correct?

"I would assume the number of frames requested by this setting is in addition to the double-buffering that games typically use."

If it is, then setting it to 0 would override Triple Buffering...
Or does it work like this:
Then again, I don't think single buffering is quite possible; I gave a couple of games a try at "0" and couldn't notice a dramatic difference... I assume the conversion works like this, though the wording is confusing:
0 frames ahead = single buffering (not always possible, never a good idea).
1 frame ahead = double buffering (minimum suggested for 3D graphics).
2 frames ahead = triple buffering (probably the best setting for most users).
3+ frames ahead will not increase frame rate, but will use extra video RAM and increase latency (rough numbers in the sketch below).
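To put rough numbers on that latency point: with VSync on at a fixed refresh rate, each queued frame adds about one refresh interval between input sampling and display. A back-of-the-envelope model (my own simplification, ignoring CPU/GPU frame-time variation):

    // Back-of-the-envelope: with VSync at 60 Hz, every frame sitting in
    // the queue adds ~16.7 ms between input sampling and display.
    #include <cstdio>

    int main()
    {
        const double refreshHz = 60.0;
        const double frameMs   = 1000.0 / refreshHz; // ~16.7 ms per queued frame

        for (int queueDepth = 0; queueDepth <= 3; ++queueDepth)
        {
            // queueDepth frames buffered ahead + 1 frame being scanned out
            double lagMs = (queueDepth + 1) * frameMs;
            std::printf("flip queue %d -> roughly %.1f ms input-to-display lag\n",
                        queueDepth, lagMs);
        }
        return 0;
    }

Under this crude model, going from 3 to 1 shaves roughly two refresh intervals (~33 ms at 60 Hz) off the lag, which would explain why people notice the difference in twitchy shooters.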
One more quote from that thread to add some confusion:
It is not clear to me if the "flip-queue" is the same as nVidia's "frames to render ahead" setting, but it might be. For one thing, "render" implies something the GPU is doing, while the post indicates that "flip-queue" size sets how many frames the CPU can progress ahead of the GPU; in other words, something the CPU is doing. If they are the same thing, then I was totally wrong, and "frames to render ahead" has nothing to do with single/double/triple-buffering; furthermore, it would always be a valid setting regardless of VSync. Of course, in that case, it is named incorrectly, since how many frames ahead of the video card the CPU can get has nothing to do with how many frames are rendered ahead... which is why the only real way to figure out what the setting means is to test it.

OK, so my situation is: I have two Radeon cards in CFX, I force VSync, and I force Triple Buffering. What would be the best Flip Queue Size setting for me? And will I have to reconsider Triple Buffering? (That depends on the answer to the previous question, I guess.)
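If the flip queue really is a CPU-side limit, as the quote above suggests, the mechanism is easy to express in code. Here's a sketch of how an engine could cap how far its CPU runs ahead of the GPU; this is entirely my own illustration (the names and the C++20 semaphore are mine), not how any particular driver implements it:

    #include <semaphore> // C++20

    constexpr int kMaxFramesAhead = 1; // the "flip queue size" analogue
    std::counting_semaphore<kMaxFramesAhead> framesInFlight{kMaxFramesAhead};

    void SimulateAndSubmitFrame() { /* game logic + draw-call submission */ }

    // Invoked when the GPU signals that it finished a frame
    // (e.g. from a fence/query callback).
    void OnGpuFrameComplete() { framesInFlight.release(); }

    void GameLoop()
    {
        for (;;)
        {
            // Blocks once the CPU is already kMaxFramesAhead frames ahead,
            // so input is sampled as late as possible before each frame.
            framesInFlight.acquire();
            SimulateAndSubmitFrame();
        }
    }

Under that reading, the setting throttles the producer (the CPU) rather than changing how many buffers the swap chain has, which would explain why it can coexist with double or triple buffering and why it matters even with VSync forced.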
I hate tearing (hence VSync on), but having extra input lag is definitely not something I want (hence the whole issue).
Anyway, feel free to share your ideas and experience.