Wow ... you are talkative ... this is my last post tonight, gotta get to bed.
In terms of noticeable difference ... again, that is in the eye of the beholder. Ironically, just a few days ago I was using Lightroom on a dual-core rig (an X6800) for some quick touch-ups and got very annoyed. At the time I built that rig it was darn fast, but it just isn't as snappy or responsive as my quad, which does the same work so much faster, and I notice it. Also, when I am transcoding or importing from a different video format into Premiere, or even for simple routine stuff in Pinnacle Studio 11, the difference is very noticeable, especially when it takes 2x longer to build the DVD. But that is me, I prefer it faster ... you may not be doing this level of computing.
In terms of GPGPU, this is really a different topic, a different thread for debate -- but I don't see CUDA, for example, really taking off, for a few reasons which I will not elaborate on ... my personal opinion is that CUDA/nVidia will end up victimized very much like AMD victimized Itanium.
For your video monitor commentary, I am correct on this; you should do some more research ... Google is your friend. Video monitors refresh the entire screen at typically 60 Hz, which means every pixel in the field is updated in tandem 60 times per second (progressive), i.e. 60 frames in one second. Some monitors support higher rates, but the human brain cannot distinguish individual frames beyond roughly 20 FPS ... if you run the game full bore, the 60 Hz refresh will often catch the frame buffer mid-update since the two are not synced -- this is what creates the lines and tearing in the image. Vsync gives the smoothest, most enjoyable gameplay because each refresh of the monitor coincides with a completed frame buffer, i.e. that is what it means to be synced. We may be saying the same thing but differently ...
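
If it helps to see what vsync actually does at the application level, here is a minimal sketch. I'm assuming GLFW/OpenGL purely for illustration, not any particular game or engine: glfwSwapInterval(1) asks the driver to hold each buffer swap until the monitor's next vertical refresh, so the display never reads a half-drawn frame.

/* Minimal vsync sketch with GLFW/OpenGL (illustrative only). */
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    GLFWwindow *win = glfwCreateWindow(800, 600, "vsync demo", NULL, NULL);
    if (!win) {
        glfwTerminate();
        return 1;
    }

    glfwMakeContextCurrent(win);
    glfwSwapInterval(1);   /* 1 = wait for vertical refresh (vsync on); 0 = run full bore */

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);   /* ... draw the frame here ... */
        glfwSwapBuffers(win);           /* with vsync on, the swap waits for the next refresh */
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}

With the interval set to 0 the loop runs as fast as the GPU allows and swaps can land mid-refresh, which is exactly where the tearing comes from; with it set to 1 every swap lines up with a completed frame buffer.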