http://www.tomshardware.com/news/nvi...ing,33762.html

Unfortunately, the tools we normally use for testing graphics cards, such as Fraps, PresentMon, and OCAT, are not suitable for collecting performance data in virtual reality through HTC's Vive or Oculus' Rift; they can measure the frames that the game engine generates, but miss everything that happens afterward in the VR runtime, before those frames appear in the HMD. VRMark and VRScore get us part-way there with synthetic measurements, but we're usually more interested in making real-world comparisons.
Enter FCAT VR, conceptually similar to the original FCAT, which we wrote about in Challenging FPS: Testing SLI And CrossFire Using Video Capture. This time, however, Nvidia aims to give enthusiasts a way to test their hardware in real-world VR apps, compile meaningful information, and present it in a way that doesn't require you to know your way around Perl.
What exactly are we looking for here? Nvidia does a pretty stellar job describing the VR pipeline and where things can go wrong, so we'll borrow from the company's explanation (a code sketch of the same loop follows the list):
  1. The VR game samples the current headset position sensor and updates the camera position to correctly track a user's head position.
  2. The game then establishes a graphics frame, and the GPU renders the new frame to a texture (not the final display).
  3. The VR runtime reads the new texture, modifies it, and generates a final image that is displayed on the headset display. Two interesting modifications include color correction and lens correction, but the work done by the VR runtime can be much more elaborate.
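To make those three steps concrete, here is a minimal sketch of such a per-frame loop against the OpenVR (SteamVR) API, the runtime behind the Vive. RenderEyeToTexture() is a hypothetical stand-in for the game's own renderer, and error handling is pared down; treat it as an illustration of the hand-off to the runtime, not a complete application.

```cpp
#include <cstdint>
#include <openvr.h>

// Hypothetical stand-in for the game's renderer: a real app would draw the
// scene for the given eye into a GPU texture and return its OpenGL handle.
static uint32_t RenderEyeToTexture(vr::EVREye eye,
                                   const vr::TrackedDevicePose_t& hmdPose) {
    (void)eye; (void)hmdPose;
    return 0;  // placeholder texture handle
}

int main() {
    vr::EVRInitError initError = vr::VRInitError_None;
    vr::VR_Init(&initError, vr::VRApplication_Scene);
    if (initError != vr::VRInitError_None) return 1;

    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];

    for (int frame = 0; frame < 1000; ++frame) {  // stand-in for the game loop
        // Step 1: block until the compositor releases the frame, then read
        // the headset's freshly sampled position and orientation.
        vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount,
                                         nullptr, 0);
        const vr::TrackedDevicePose_t& hmdPose =
            poses[vr::k_unTrackedDeviceIndex_Hmd];

        // Step 2: render each eye into an intermediate texture, not the
        // final display.
        uint32_t leftTex  = RenderEyeToTexture(vr::Eye_Left,  hmdPose);
        uint32_t rightTex = RenderEyeToTexture(vr::Eye_Right, hmdPose);

        // Step 3: submit the textures to the VR runtime, which applies lens
        // and color correction (and possibly more) before the final image
        // reaches the headset's panel.
        vr::Texture_t left  = { reinterpret_cast<void*>(uintptr_t(leftTex)),
                                vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
        vr::Texture_t right = { reinterpret_cast<void*>(uintptr_t(rightTex)),
                                vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
        vr::VRCompositor()->Submit(vr::Eye_Left,  &left);
        vr::VRCompositor()->Submit(vr::Eye_Right, &right);
    }

    vr::VR_Shutdown();
    return 0;
}
```

Everything after Submit() happens inside the runtime's compositor, which is exactly the part of the pipeline that Fraps, PresentMon, and OCAT never see; that hand-off is the piece FCAT VR sets out to instrument.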