I haven't seen any >60 Hz 2560x1600 displays, and mine is just a little over a year old :(
not even the same book :hehe:
I'm not concerned with video, but with game rendering.
Anywho, I'm getting Assassin's Creed and GRID tomorrow 'cos I need to play :D
...but I see your point that the YouTube video wouldn't be able to show microstuttering due to the 30 fps limit. Tbh I only noticed some major lag points (mega stutter :D) where the rendered frames seem to get "stuck" in the YouTube video with GTX 280 tri-SLI.
:).
I read on GPU Cafe that XS resident Sampsa already has a 4870X2 in his hands. Where are the benchmarks, SAMPSA!!!
http://gpucafe.com/?p=19
break NDA!
kiddin
It demolishes the gameplay only in FPS games, because your mouse is always more responsive than the input lag. The input lag is usually something like 200-300 ms, but a good mouse can register your movement way faster than that. Vsync is perfectly acceptable in games that you play with a keyboard, a controller, or a racing wheel.
You are comparing CrossFire to SLI, though; while both have microstutter, it's apples to oranges. CrossFire might handle AFR in GRID much better than SLI does. And since you say it is almost there with vsync off, I can deduce that there is a world of difference, because with SLI it is not even close to "there".
Input lag is basically when you move your mouse and your crosshair moves 0.2 seconds after you do. Trust me, it might not seem like a big deal, but try playing a multiplayer FPS game with vsync on and you will see how much of a disadvantage it is. It only happens in games that run above your refresh rate, such as a game running at 100 fps when your refresh rate is 60. The explanation for this is below.
That's got absolutely nothing to do with it... :confused:
No matter what your refresh rate is, if your fps is at or slightly above your refresh rate, you will get tears. Let's say you run at 100 Hz and your game is running at 102 fps: you will get tears, because those 2 extra frames will be blended with 2 other frames to make 100 refreshes. That's why you enable vsync, so that the extra frames are discarded and tearing is avoided. But those discarded frames are what introduce the input lag. The more fps your card renders, the more frames are discarded, and the more input lag you get.
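To make the frame-discarding concrete, here's a minimal Python sketch (my own illustration, not something from the posts above) that counts how many rendered frames a vsynced display throws away, using the 100 Hz / 102 fps numbers from the example:
Code:
# Minimal vsync sketch: rendering above the refresh rate means discarded frames.
# Numbers are the 100 Hz / 102 fps example from the post above.
REFRESH_HZ = 100
RENDER_FPS = 102

refresh_times = [i / REFRESH_HZ for i in range(REFRESH_HZ)]  # 1 second of refreshes
frame_times = [i / RENDER_FPS for i in range(RENDER_FPS)]    # 1 second of frames

shown = set()
for t in refresh_times:
    ready = [f for f in frame_times if f <= t]  # frames finished before this refresh
    if ready:
        shown.add(ready[-1])                    # vsync displays only the newest one

discarded = len(frame_times) - len(shown)
print(f"{len(frame_times)} rendered, {len(shown)} shown, {discarded} discarded")
Those two discarded frames are the "extra" frames described above; the faster the card renders past the refresh rate, the more frames (and the more of your input) never reach the screen.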
I would recommend you post it.
It's hard data; arguments and fanboi backlash can come later.
Edit: Not sure if this has been posted yet, nor do I know of the credibility of this piece of news
http://www.techpowerup.com/65382/R70...e_GTX_280.html
"Faster" in what sense? Pure FPS?
Not to spread my woes, but I've been waiting for a gfx card for 3 months now and have been without a desktop rig since 2004... :(
Given my incomplete hardware and desperate situation, do you guys think it's blatantly idiotic of me if I can't wait another month for the 4870X2 and instead get 2x 4870?
Quote:
The performance of the R700 will be 80% better than the GTX280.
As we know, AMD's new-generation flagship product, the R700, will be better than NVIDIA's GTX280.
The R700 (Radeon HD4870X2) 2GB will be 50% better than the GTX280 on average and up to 80% better at most.
Besides, the 2GB GDDR3 R700PRO (Radeon HD4850X2) will also be better than the GTX280.
We guess AMD will release R700 products at the end of this month or at the beginning of next month.
Hardspell
80% I doubt...
regards
Only in special apps...
I doubt the 50% average, though.
That would mean near-perfect scaling.
After seeing the 2gig 4850, I want a 4gig 4870x2. :D
After all, we have the new inter-chip communication, and we still don't know exactly what it has to offer. I'm pretty sure that if the card based on the Super-RV770 has shared memory, the scaling could be more than 100% because of the bonus performance that would come with that spec.
50% average is very feasible.
Currently, 4870 Crossfire is about:
- 30% faster than GTX280 @1600x1200 4xAA 16xAF.
- 73% faster than GTX280 @1600x1200 8xAA 16xAF.
Link
The 4870X2 with 2 GB and the interconnect link will just slaughter the GTX280.
Good news! I can confirm that, based on my own tests, microstuttering is gone on the R700!
I've tested with the R700 (ATI Radeon HD 4870 X2) and the R680 (ATI Radeon HD 3870 X2) in Crysis (1600x1200, High settings). I used Fraps with frametimes logging enabled and recorded 2 seconds from exactly the same point in the game (loaded from a save game). Based on my recorded data, with the ATI Radeon HD 3870 X2 every other frame is rendered after ~21.5 ms and the frames in between after ~49.5 ms. With the ATI Radeon HD 4870 X2, all frames are rendered after ~21.9 ms.
More coming soon, now I have to go and attend a beach soccer tournament! :)
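For anyone wanting to reproduce this, here is a minimal sketch (my own, not Sampsa's actual script) of the odd/even frametime analysis. It assumes the usual Fraps frametimes CSV layout: a header row, then "frame number, cumulative time in ms" lines, saved as "frametimes.csv" (hypothetical filename):
Code:
import csv

def frame_deltas(path):
    with open(path) as f:
        rows = list(csv.reader(f))[1:]                 # skip the header row
    times = [float(t) for _, t in rows]                # cumulative times in ms
    return [b - a for a, b in zip(times, times[1:])]   # per-frame intervals

deltas = frame_deltas("frametimes.csv")
odd, even = deltas[0::2], deltas[1::2]
print(f"odd frames:  {sum(odd) / len(odd):.1f} ms avg")
print(f"even frames: {sum(even) / len(even):.1f} ms avg")
# 3870 X2-style result: ~21.5 ms vs ~49.5 ms -> microstutter
# 4870 X2-style result: ~21.9 ms vs ~21.9 ms -> smooth
A large gap between the two averages is exactly the alternating short/long pattern Sampsa describes for the 3870 X2.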
Thanks a LOT Sampsa!
I think I might be right on the video RAM reasoning posted some months ago (bite me StarGazer, really)
Glad I decided on an X38 over the 790i!!! I made the right choice :)
Maybe I missed something in the whole microstuttering debate, but the 4870X2 @ 21.9 fps would have more microstuttering than the 3870X2 @ 49.5 fps.
Also, how much of this stuttering is caused simply by the fact that CrossFire doesn't even work properly for that game? If it did work properly, would it still happen?
21.9 fps would look like hell and would be, if not exactly unplayable, then absolutely jittery from the low framerate. That's not microstuttering, but simple video lag from low FPS.
I think the microstuttering thing has been taken too far by people who don't understand the difference between microstuttering and just plain ol' low-fps lag or skipping.
I personally have never seen a game do this, so I can not speak from experience, although I do understand what microstuttering means.
I may in fact, now that I think of it, have seen microstuttering with my 8800 GTX when running too high of an overclock. The overclock doesn't fail per se, and the frame rates are way high (60 fps or higher), but the game or benchmark is jumpy... run, skip, run, skip, all within a second's time...
Once you downclock the GPU or RAM (mostly the GPU), the skipping/stuttering goes away.
Is this similar to what you guys are coining as "microstuttering"?
Has anyone SERIOUSLY sat down and played with various combinations of GPU and memory speeds, overclocking or even underclocking, to see if it goes away?
It may not make any difference since each card handles half the workload, but it's just a thought.
21.9 ms doesn't mean 21.9 FPS. It means 1000/21.9 ≈ 46 FPS.
Every odd frame at 20 ms and every even frame at 50 ms puts the FRAPS framerate at 1000/35 ≈ 28.6 FPS, and puts the actual fluidity around 1000/50 = 20 FPS.
Every frame at 20 ms puts both the FRAPS framerate and the "real" framerate at 1000/20 = 50 FPS. Believe it or not, the fluidity of the 4870X2 in that particular scene is 2.5 times better than the 3870X2's.
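A tiny sketch of that arithmetic (my own illustration), using the 20 ms / 50 ms alternation as stand-ins for the measured frametimes:
Code:
def fraps_fps(deltas_ms):
    # What Fraps reports: frame count divided by total elapsed time.
    return 1000 * len(deltas_ms) / sum(deltas_ms)

def fluidity_fps(deltas_ms):
    # Perceived smoothness is limited by the longest frame interval.
    return 1000 / max(deltas_ms)

stuttery = [20, 50] * 10   # odd frames at 20 ms, even frames at 50 ms
smooth = [20] * 20         # every frame at 20 ms

print(fraps_fps(stuttery), fluidity_fps(stuttery))  # ~28.6 FPS, 20 FPS
print(fraps_fps(smooth), fluidity_fps(smooth))      # 50 FPS, 50 FPS
The 2.5x claim falls out of the last line: 50 FPS of fluidity versus 20.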