No....none of the review sites got 2 cards for this preview.
:cool:
I am surprised more people aren't going crazy over this newsworthy story....
It seems Micro-stutter might be a thing of the past. :shrug:
:(
i rechecked the pic @
http://plaza.fi/muropaketti/artikkel...4870-x2-r700,1
for a moment there I thought it was 2x 4870 X2
@SAMPSA:
No matter what you do, when testing microstuttering (MS) you have to test at the same FPS LEVEL!! Best @ ~30fps. Adjust all settings so that all cards are @ ~30fps when you test MS.
There is NO sense in testing card A @ 30fps and card B @ 50fps only to find out that card B @ 50fps has less MS than card A.
Usually higher fps result in LESS MS, that is just my experience.
MS is most annoying when playing at ~30fps, because SLI and CF Setups are mostly there to make settings playable that were unplayable with just single gpus.
If you have MS @ 50fps, who cares? Because 50fps WITH MS still feels very playable, similar to 35fps with a single card. But when you get MS @ 30fps, it feels like 20fps - and THAT'S WHERE IT HURTS.
So PLEASE test all single and multi GPU setups @ 30fps - only then can we find out whether CF really shows better MS handling than SLI or not.
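To illustrate the point, here is a minimal sketch with made-up frame gaps (not measured data): with AFR microstutter, frames tend to arrive in uneven pairs, so the "felt" framerate tracks the longest gap rather than the average. That's why 30fps with MS can feel like 20fps.
Code:
# Python sketch; hypothetical numbers purely for illustration
def avg_fps(gaps_ms):
    # average FPS implied by a list of frame-to-frame gaps (milliseconds)
    return 1000.0 / (sum(gaps_ms) / len(gaps_ms))

def felt_fps(gaps_ms):
    # rough "perceived" FPS: the rate implied by the worst gap in the pattern
    return 1000.0 / max(gaps_ms)

smooth = [33.3, 33.3, 33.3, 33.3]    # even pacing: a true 30fps
stutter = [16.6, 50.0, 16.6, 50.0]   # same ~30fps average, AFR-style wobble

print(avg_fps(smooth), felt_fps(smooth))    # ~30 and ~30
print(avg_fps(stutter), felt_fps(stutter))  # ~30 average, but feels like ~20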
Very nice! Maybe you should use a different way of graphing it though, as it looks like FPS over time; it's not so easy to read.
so the 4800 series fixes the micro stuttering issue?
how about gtx280 sli and 8800gt sli?
Update:
Here are charts showing the difference between frametimes with ATI Radeon HD 4870 X2, 4870 CrossFireX and single 4870 in Race Driver: GRID:
http://plaza.fi/s/f/editor/images/gr...erence_ati.png
And the GeForce 9800 GX2 compared to the same card in single GPU mode:
http://plaza.fi/s/f/editor/images/gr...nce_nvidia.png
Still waiting for input from ATI on my questions about microstuttering.
Clearly no, they appear to perform about the same. What a hollow observation on your part though, since neither appear to have MS. That's definitely an advantage over last generation hardware that...did, at least in more cases. Don't worry, I'm sure you can still find plenty of other bad things about multi-GPU. I'll help you find some if you like. :rolleyes:
I don't need to. They do that fine by themselves.
But in this case it was about the PLX chip and CF sideport. They offer no advantage compared to regular CF.
Oh I know. You missed the previous part.
Graphs are great, thanks for the hard work :)
I do find it strange however that a single 4870 seems to have larger gaps in the frametimes; it's as high as 5ms between the 15th and 16th frame :shrug:
As far as the sideport showing no benefits, I doubt AMD implemented it for our health... I say wait until finalized samples and drivers reach the reviewers and a wider variety of tests are run before you jump on it.
No argument there. That's certainly the way things appear now and I don't expect that to change. But they don't seem to confer a disadvantage either - at least as far as performance is concerned.
If anything it simply adds a little to the already substantial power draw and heat dissipation. Certainly a big disadvantage for some, though not much of a concern of mine.
As far as I know there isn't any official information about the CrossFire sideport yet, but I'm hoping to hear some details about it from AMD.
My wild guess is that the RV770 graphics chip actually has two interfaces:
One is for the normal PCI Express 2.0 bus on the motherboard, which is used with the Radeon HD 4870 and 4850 graphics cards.
The second is the CrossFire sideport, which is optimized for the PLX gen 2 PEX8647 PCI Express switch and used in two-GPU Gemini configurations with the Radeon HD 4850 X2 and 4870 X2 graphics cards.
AMD/ATI has been 100% committed to producing a two-GPU configuration since the beginning of the RV770 design. Maybe they figured, why not create a separate optimized interface for the PLX switch that would offer some benefits compared to the normal PCI Express interface on the motherboard.
Nothing above is based on facts, just my own 2 cents :)
ATI has been tweaking drivers so they avoid talking over the PCIe bus and talk via the CrossFire channel instead. If the normal 4870s use the sideport as well, they won't vary very much from the 4870 X2 results.
The only way to find out for sure would be to run the 4870s on a PCIe 1.1 setup. As the 3870 X2's PLX chip is only 1.1, you'll be able to see if that's the cause of microstutter in this case; if 1.1 doesn't make a difference, then all the 4870s are using the sideport/driver tweaks.
Have you tried SLI (with separate cards) on an older mobo, or tried forcing the mobo to PCIe 1.0/1.1 and recording results?
Were some of you absent when they made you do graphs in school? :rolleyes:
@ kro
If you read each graph's header you will see the information represents ONLY 30 separate frames. The lines for each card indicate how much time passes between each frame. The smaller the difference, the smoother the image update. Simple enough? Clearly in GRID all the tested 4870s exhibit consistent render times, thus there's no perceivable stuttering.
TBH I'm a bit confused about the graphs as well.
Sampsa, if it's possible, can you add some comments on the methodology and what we are looking for in the graphs, and provide the theoretical ideal graph as a reference?
Would be really useful :)
Ok, let me simplify my question and ask it again.
The graph shows that for the single 4870 it took something like 20ms down to 15ms to show frames on screen, and for the 4870x2 something like 18ms down to 15ms. I think we agree on this, as it is shown on the graph.
so fps of the single 4870 is between:
1/0.020 and 1/0.015 this means 50fps to 66fps
for 4870x2:
1/0.018 to 1/0.015 means 55 to 66fps
What I think is that this gap between the 4870 and 4870x2 should be much bigger, and I am asking why it is like this?
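To double-check the arithmetic above, here is a trivial sketch (the conversion is just the reciprocal of the frame time):
Code:
# Python: frame time in milliseconds -> FPS
def fps(ms):
    return 1000.0 / ms

print(fps(20), fps(15))  # single 4870: 50.0 to ~66.7 FPS
print(fps(18), fps(15))  # 4870 X2:    ~55.6 to ~66.7 FPS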
kromosto
This may indicate that CF does not work in GRID
Oh sorry about that, didn't realize you were working the math there. I see what you're getting at now.
As far as CF not working in Grid, that would be odd as the one preview shows the 4870s owning that game ( both the 4870s and even more so the X2 )
EDIT: LOL ty v_rr :D
What I think is that GRID is a strange game for a test :D
look what anandtech found.
4870x2 84
4870 43
4870cf 30
9800gx2 17
http://www.anandtech.com/video/showdoc.aspx?i=3354&p=7
I would like to further add that trying to find out what the FPS are really doesn't matter/help. The discussion is whether the X2, CF or a single card show any difference in AFR stuttering.
These illustrations show this.
I am sure Sampsa will be releasing more information about resolution, FPS, etc. But the most concerning of our inquiries has been answered.
Hi,
Ok here is a short explanation:
First of all, I'm using the Race Driver: GRID demo, and 4870 CrossFire and 9800 GX2 seem to work just FINE. At 1920x1200 resolution with 4xAA and 16xAF I get 66.9 FPS with a single 4870, 73.4 FPS with 4870 CF and 75.6 FPS with the 4870 X2:
http://plaza.fi/s/f/editor/images/X-...3422328104.png
(The red bar labeled "keskimääräinen", Finnish for "average", is the average FPS)
Second, the graphs I've shown here related to microstuttering ARE NOT intended to display performance but the difference between frame rendering times. I have only recorded 2 seconds of gameplay and used the first 30 frames from the data.
If performance is 60 FPS, that would mean the graphics card renders 120 frames in 2 seconds. Since I have only used 30 frames to analyze the results and draw the graphs, that means gameplay of half a second (0.5 s, 500 ms)!!! That's not enough to draw conclusions about performance...
... it might not be enough to draw conclusions about microstuttering either, but in my own tests 30 frames already show a clear pattern with the ATI Radeon HD 3870 X2 and its rendering problem, where every other frame is rendered after a different interval than the frames in between.
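If someone wants to reproduce this kind of graph, here is a rough sketch of the analysis, assuming a FRAPS-style frametimes CSV with cumulative times in milliseconds. The filename and column layout are assumptions for illustration, not my actual tooling:
Code:
# Python: take the first 30 frames of a frametime log and print the
# gap between consecutive frames -- the quantity plotted in the graphs
import csv

def frame_gaps(path, n_frames=30):
    times = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                     # skip the "Frame, Time (ms)" header
        for row in reader:
            times.append(float(row[1]))  # cumulative time of each frame, ms
            if len(times) >= n_frames:
                break
    # difference between consecutive frame times = per-frame gap
    return [b - a for a, b in zip(times, times[1:])]

for i, gap in enumerate(frame_gaps("grid_frametimes.csv"), start=1):
    print(f"frame {i} -> {i + 1}: {gap:.1f} ms")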
SLI is total garbage for fast-paced FPS games. You suffer from massive lag using the AFR rendering mode. Half of the 6 pre-rendered frames are out of date by the time they get displayed.
Indeed an important thing to note. We need a game where we see 80%+ scaling to have an educated idea (or even high levels of IQ on a 30" LCD should give a nice indication). I didn't realize scaling in GRID was nearly nonexistent. Perhaps higher levels of anti-aliasing may create a spread between the single 4870 and CF?
ty for fast answers.
Now I want to ask something else. Now we are sure that dual GPU is working, and as the test results show, the single card has more stutter than the CF or X2 versions. I think this might be because the single 4870 is at its limits, so it stutters more when some frames are harder to render. So would it be healthier to test at lower resolutions or detail settings, so the single 4870 isn't pushed to its limits, to see what is really going on on the stuttering side?
We also need to be very careful when we talk about microstuttering.
If you play a game at avg 15 FPS and it seems laggy, that IS NOT microstuttering. Your quality settings are just too high for your system.
If you play a game at avg 30 FPS and the minimum FPS drops in some situations to below 20 FPS and the game feels laggy, that IS NOT microstuttering. Your quality settings are just too high for your system.
http://plaza.fi/s/f/editor/images/grid_graph2.png
If you measure the times at which frames are rendered, draw a graph of the differences between those frame times, and see a WOBBLING line (like the yellow line in the graph above), that IS microstuttering.
Correct me if I'm wrong :)
PS. Here is a good FAQ about microstuttering: http://www.hardforum.com/showthread.php?t=1317582
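One simple way to put a number on that wobbling line (just an illustration, not an official metric): compare the average gap of even-numbered frames against odd-numbered frames. AFR microstuttering shows up as a systematic difference between the two, while plain low FPS does not:
Code:
# Python: even/odd gap comparison as a crude microstutter indicator
def wobble_ms(gaps_ms):
    even = gaps_ms[0::2]
    odd = gaps_ms[1::2]
    return abs(sum(even) / len(even) - sum(odd) / len(odd))

steady = [33.3] * 10        # slow but evenly paced: NOT microstuttering
wobbly = [15.0, 20.0] * 5   # zig-zag like the yellow line: IS microstuttering

print(wobble_ms(steady))  # 0.0 ms
print(wobble_ms(wobbly))  # 5.0 ms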
Yes, you are right. What I want is to be sure what is going on here. Your tests show that a single GPU stutters more than a dual one. Since the stutter we are looking for shouldn't be worse on a single GPU than on dual GPUs, this situation needs an answer. As I said earlier, maybe the single card stutters more because it is at its limits, but as you said this might not show on the graph like this, so that theory might be wrong. Then I am thinking of something else: maybe ATI did something about microstuttering in the drivers or elsewhere, so that with CF added it stutters less than a single GPU. Also, the single 4870 stutters like a 9800 GX2, as shown in the graph. If the 9800 GX2 has a stuttering problem, then we should say the single 4870 has a stuttering problem too.
I am just seeking answers for this situation.
I have already tested UT3 @ 1920x1200 no AA and 16xAF:
http://www.xtremesystems.org/forums/...&postcount=529
No issues with any of the cards I tried.
This could be helpful?
The only thing that matters is how big the variation in that yellow horizontal line is (how much it zig-zags up and down). It's 5 milliseconds for the 4870 (it varies between 15 and 20 milliseconds), which is clearly unnoticeable. Interestingly, both the 4870 X2 and 4870 CF play more smoothly than the single 4870, but as I said, that's unnoticeable.
http://i14.photobucket.com/albums/a3...erence_ati.png
You won't notice microstutter in UT3. It's there, but the framerate is too high for it to be perceptible, and the UT3 engine is already optimized for multi-GPU and PhysX. Because the engine will delay to wait for PhysX data, stutter is almost impossible to notice.
But try out bioshock, or Gears.
I think sometimes stutter comes from the onboard compositing engine. It takes some time for the final rendered frame to go from the second/third/fourth card to the main card for output. The previously used TMDS chips showed this very quickly when pushed...a lot of the time the second frame just never gets there in time...
So, what happens in generations previous to RV770 is that a memory controller ringstop is used for CrossFire communication. Should that ringstop be busy when the data comes in, there is a delay to free it up, and we see "stutter".
Crysis makes this more pronounced because of its already high memory usage.
So I'd agree with you totally, Sampsa, as to what is going on. And again I want to thank you for taking the time to show that it IS an issue with some cards, and that those of us who were complaining about it weren't off our rockers. It's highly appreciated.
I will say though, I think your testing methods are a bit flawed. You need to let a game load up, and then let it idle for a few seconds before taking data. You notice "stutter" in single cards due to FRAPS, so giving the system some time to settle into its task will highlight this problem a bit better, as ANY app is going to be inconsistent during the first couple of seconds.
Kinda off topic, but any explanation on why your CF / X2 scaling in GRID is so much 'different' than what the AnandTech preview shows?
It seems they got close to 100% scaling with X2 compared to single. True the res they used was higher but I don't think this is just a matter of CPU bottlenecking. Your test shows only what, 10% scaling?
It's the first 30 or so frames, which CANNOT indicate performance accurately.
I bet that's because of different motherboards being used. I have had many differences in CrossFire scaling between different boards. I have seen Gigabyte boards give the most coherent results between different games. MSI and Asus may be great in some games but not so consistent across different games.
One interesting thing you find on some Gigabyte mobos is an extra Molex socket on the PCB just for CrossFire use! I was a little surprised and wondered what the heck that was for. :)
GRID only plays nice with Crossfire/SLI after patch 1.1. Sampsa? :)
What you say makes little sense to me, as most of what changes is some internal chipset latency, which will never have more than an FPS or two of impact in games. With multi-GPU this may have a slightly larger impact..maybe double..but not more than that.
The extra socket is to provide power directly to the PCIe slots, eliminating extra electrical noise in the PCB. But now that cards are drawing more power, this socket is disappearing, as the 24-pin mobo connector went up from 20 pins to provide the extra power anyway.
Ok, nm...it was explained already. :p
Thanks for clearing this up. But that doesn't really explain the difference between motherboards even from the same manufacturer.
http://www.ocworkbench.com/2007/giga...ost-planet.jpg
Lol, bad results. How does half the PCI-e bandwidth = double the performance? (X38 vs P45, never mind the P45 TRD issues)
Why is it that in my own testing I see the opposite....oh, right, my testing doesn't put money in my pocket!
Anyway, different chipsets are not a good comparison, eh? Maybe try the same chipset to see the difference.
There's already a new patch 1.2 out
New Race Driver GRID PATCH 1.2
Quote:
http://www.racedrivergrid.com/?territory=EnglishUK
Patch 1.1: Download (188mb)
Patch 1.2: Download (192mb)
regards
Agreed, they had mentioned that improvements were made for multi-GPU in 1.1 (something I mentioned earlier in this thread). Because he didn't mention which version of the Grid demo he used, I suggest he try Grid demo 1.1 instead. I am not sure he's aware of this, but there are 2 versions of the Grid demo, v1.0 and v1.1. I haven't found a v1.2 demo as of yet, and there is no mention of whether patch 1.2 works on the demo or not. However, a test like this should be done using the retail version (that way he would have the current patch).
Edit:
I would also like to know what drivers were used in this test, for clarity.
:shocked:
24 core GPU in 2~5 years using the new HYDRA bridge chip by Lucid, enough said!
Listen to the long interview.
Power and heat.
Likewise. :yepp:
Would be interested to see how much of this is fixed in the driver vs. the hardware vs. the software. The 4800 series seems to have a clear advantage over its predecessor. I wonder if the GTX 200 series can claim the same? Either way I am looking forward to Sampsa's final review on the subject.
Me as well. Did he give us a date or approximate time when he may be done with said tests?
he has no r700, had to return it.
I think it's correct that only 30 frames are not enough to make a solid conclusion. Of course, like you say, you can see a pattern. But it's also possible that the pattern only exists in the first second of the test.
What I think is weird is that the 4870 X2 has about the same variance in your graph as the 2x 4870 CrossFire. The CrossFireX sideport on the R700 should lower the microstuttering even more than normal CrossFire..
Why? The PLX chip adds latency....a single GPU would have the PCIe bus only...so with minimal data it will be the same, but when heavy data is exchanged, the lower latency of the sideport will let R700 shine.
Why? Because I heard from CJ that with the R700 sideport the microstuttering would be gone.
So then I made the assumption that microstuttering still exists in 4870 CrossFire (2 cards), even though I haven't heard many people complain about it.
Edit: And yes, I know assumptions can be bad. ;)
The sideport is still there on single GPUs...there are no longer ringstops for that. So it should be the same for CrossFire via mobo PCIe or CrossFire via VGA-board PCIe...either way you are no longer using memory resources for multi-GPU communication, and are using the sideport instead. R700 excels due to the lower latency from less physical distance between the cards, and the PLX chip should offer at least 16 lanes of PCIe 2.0 between the cards in combination with the sideport, whereas some mobos will not give the same connectivity.
There are more issues at play in "microstutter" than anyone is really aware of, other than maybe myself and those that built the cards...and even then I doubt they are putting all that much work into it...you can only fix what's broken, you can't fix something that's not broken!
Cadaveca, you're implying that microstutter is not a 'broken' problem. So what is your take on it then?
Isn't errata usually something that is broken?
You are over-simplifying. Errata are corrections to a program's code; they do not modify the fundamental functions of the program. A flaw in the program, on the other hand, requires a modification to the fundamental functions and is what is called "broken" in layman's terms.
Perkam
Well keeping it simple is usually best :)
Broke is broke. You are just describing the method to fix it. If it's a code problem, what code? And why not apply those fixes to the 3xxx series also?
I don't think many would consider the problems the Phenom originally had as not 'broke'.
TLB = flaw, i.e. something you can fix but not without recalling the product. Microstuttering = errata, i.e. it can be fixed with architectural updates and driver optimizations without the need to recall the product.
The reason the same can't be applied to the 3870 is that the CrossFire updates were not available on the 3870 X2 but are now on the 4870.
Perkam
Perkam, K10's TLB problem is an erratum, and yes, it can be fixed without a need to recall a whole line (it is/was done via the TLB switch in BIOS). AMD did fix it in hardware (B3) without the penalty the software fix brings.
As for microstuttering with multi-GPU, it's all around a very complex problem, like cadaveca said.
Just arguing semantics.
AFAIK, B2 Phenoms are still sold and AMD is not accepting RMAs for them. Quote:
recall, n.:
A request by the manufacturer of a product that has been identified as defective to return it, as for necessary repairs or adjustments.
So you used the wrong word in your post (recall).
B3 is a stepping that fixed B2's TLB problem; AMD never took B2s back and handed over B3s in exchange.
B2s work fine even without the patch, and to encounter the erratum (since that is an erratum, contrary to what you said) you need to run very specific multithreaded code. With the TLB patch you can't hit the problem even in that case. B3 has a hardware fix with no penalty at all.
Microstuttering effect is not an erratum.
As are you. Over iterations of CrossFire they have solved many issues, and what we are left with now is about as good as it can get. That start-up's multi-GPU accelerator, to me, sounds more like a dispatch processor...I question its worth. I mean, it could be useful, but I need far more details than what has been given so far. They aren't even making the chips..they are just licensing the tech!
Anyway, the first TMDS chips had a big flaw: they did not provide enough bandwidth (X800/X850). The next iteration they doubled up the chips (CrossFire wasn't able to do dual-link DVI at first, a hardware requirement for 2560x1600 and similar resolutions).
So, it was still bandwidth limited and, due to its nature, added input lag. So they moved the interconnect into the GPU, using a ringstop for extra inter-GPU communication. This also allowed them to use CrossFire for HDR, etc...as the first method using the TMDS chips presented issues: only the final frame was composited together by the TMDS in most rendering modes, disallowing post-processing in some situations.
As we have found out, a single ringstop was not enough. Even with 16x + 16x PCIe lanes to the cards, allowing 8x to each card and 8x for communication, bandwidth was still at a premium.
Using a ringstop meant that resources for memory control were also used for multi-GPU...we instantly can never have a 2x performance gain because of this...all the way from the 2400 XT to the 3870 X2.
So, now we have RV770...well...details are sparse at the moment, but we know that memory control is not used for intercommunication, and that this communication now happens over the "sideport". What this port is composed of, or capable of, we won't know for a bit yet.
The only reason I think I know what's going on is due to my digging into this issue far more than anyone else...I have been complaining of this main problem with multi-gpus since it started...you can look back to my posts here about the TMDS chips being an issue. I've been following this for years now...
Anyway, it may not be errata, per se, but it is definitely a hardware issue. Like errata, specific workloads trigger the problem...and like some errata, it cannot be completely fixed without hardware changes. RV770 is the start of this change...and I expect one more major change before I see what I think is right...of course, I'm not in the industry, nor an engineer like you; I'm making assumptions. And I don't mind being an ass over it.
How am I arguing semantics?
It was broke. The frame sync was bad. In order to fix it they had to make a design change. I don't care where they had to make the change, or how; they still had to change something in order to fix it.
You say it wasn't broke, but you say it's a problem :rolleyes:
Problem, broke, whatever.
So to summarize your post containing mostly nothing relevant, you believe the fix to the microstutter is from the sideport?
I say it wasn't broke, as they knew the problems the design would pose before it was fully taped out. At the beginning of CrossFire, they (ATI) were making GPUs in a mini-fab, so at some point someone said "OK, let's go with it". You are calling it broke because it didn't work...that's arguing semantics.
As such, I cannot call it broken. Not ideal, yes....it was a design decision made for "whatever reason".
And while you may think my post contains nothing useful, it does highlight exactly what they think CAUSES microstutter, and how they have decided to fix it.
I think these guys need to see a good video representation of "microstutter".
Click here then click the youtube link in cadaveca's post. http://www.xtremesystems.org/forums/...&postcount=156
I prefer to call it micropausing still, as it seems to look like a timing error to me.
Um, STEvil...as Sampsa has shown, a timing error is EXACTLY what microstutter is all about...the timing of when frames are displayed onscreen versus when they are rendered.
Sometimes it's because of the engine: needed geometry data for frame 2 has to wait for frame 1 to complete.
Sometimes it's because the workload balancing is not right...one GPU ends up doing the majority of the work...
Sometimes it's because memory usage is high, and incoming data to be buffered in the framebuffer on the main card gets delayed...
Sometimes it's because of the driver trying to deal with one of the above three problems.
I saw your post on the hydra, STEvil, and had to laugh. At least you seem aware that this has been a problem since the stinky old article you linked...
Just keep in mind that FRAPS'ing a game to a movie adds stutter itself, with
frequency = game fps - capturing fps.
And I just checked and reproduced that kind of stutter. It is particularly easy to see if the game fps and movie fps are close.
Of course the game itself ran smoothly.
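A small sketch of that beat-frequency effect (made-up rates, purely for illustration): if the game produces frames at one rate and the capture grabs them at another, some capture intervals contain two new game frames and some only one, and that unevenness repeats at roughly game fps minus capture fps times per second:
Code:
# Python: how many new game frames land in each capture interval
def frames_per_capture_interval(game_fps, capture_fps):
    # integer math: game frame n is produced at time n / game_fps
    counts = [0] * capture_fps
    for n in range(game_fps):
        counts[n * capture_fps // game_fps] += 1  # capture slot that catches it
    return counts

counts = frames_per_capture_interval(game_fps=35, capture_fps=30)
print(counts)           # mostly 1s, with a 2 wherever two frames collide
print(counts.count(2))  # 5 doubled-up intervals per second = 35 - 30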
nonetheless, the stuttering is clearly visible in this video. especially when he's walking straight forward. it's like the player gets stuck every now and then.
thanks for the video. never saw microstuttering "in action" :up:
Video was w/ QX9650 @ 4ghz, 2x HD3870x2's, 1280x1024 res, high details.
Also, 1680x1050, same settings, no stutter...more work for gpu, yet less stutter? Crysis is a special situation, as is GRiD.
That doesn't look like microstuttering like I see it in crysis.
To me the best way to see it is to start the game, murd...eliminate the first 3 Kore...an enemies, stand in front of the red flare, and move your mouse (or strafe) from left to right. That's it. FPS goes from a 10fps minimum up to a 110fps maximum senselessly and the game feels very "stuttery", not jumpy. But in reality that minimum fps is the real one, not the average. If your minimum fps is much higher than that, 40 for example, you might not even notice it, because that 40 is actually pretty smooth. If I use a single card I get about 30fps, but it feels much smoother than 45-60fps with CF enabled.
The next place where it's clearly evident is the animation sequence where Aztec is hanging in a tree. Very stuttery even though the fps counter says 40 average, but again, it's that minimum you are actually seeing.
I must say, you guys are really selling me on the GTX280...
L7R, that's not stutter, as you describe it. Stutter is when the actual framerate and what is displayed are different. AVG framerate has no bearing on this measurement.
For example, in that video, FPS remain fairly consistent. Notice the lack of AI, and only scripted events...I purposely chose that area as I knew that performance penalties from extra rendering duties would not be the issue.
As I walk forward in the video, at the end, less geometry data is displayed, and even when the gpu does less work, the stutter is evident...when minimum framerate is increasing.
It's not as simple as just checking FPS counters to find stutter...an FPS counter will tell you how many frames are rendered, but not how many are displayed. It's this difference, between rendered FPS and displayed FPS, that we are talking about when referring to "microstutter".
And, Aberration, nV cards are prone to the same problems, but because their inter-GPU communication is different, it manifests itself at different times, even completely differently, than it does on ATI.
You can mimic microstuttering by playing a game and recording it in FRAPS. Any kind of I/O activity while gaming can give the impression of microstuttering. Therefore, its source is not bound to the video card. Other variables can come into play that can result in the stutter you see on the screen. These issues include but are not limited to:
- drivers
- tight DRAM timings/subtimings
- an unoptimized game, or a game optimized for specific hardware
- PCIe at x16 and x4 when dual GPUs are used
- I/O operations running in the background (already discussed)
- processes running in the background
- chipset timings
- vregs of the video card getting too hot (little do people know those need active cooling)
- PowerPlay (anyone remember that problem from the 3800 series)
- Vista's SuperFetch prefetching during the first 5 minutes (or so) after Vista starts up
- etc.
Now I have to ask another question. Did Sampsa use the AFR-FriendlyD3D.exe trick to make sure that the cards were scaling properly when using Grid.exe? There is no mention of what drivers were used to conduct these tests.
grid works fine after 8.6/4800 hotfix. Stutter is almost gone.
good list too...you know what's what.
I don't use FRAPS in any microstutter testing, for the reason you mentioned.
I own a GTX 280 and I've seen and played with a 4870. That said, with how well 4870 CrossFire works (which has already been tested), and the 4870 X2 being close to if not better than 4870s in CrossFire, I'm confident that the 4870 X2 can easily smash the GTX 280. The last ATI card I owned was a 9800 Pro, and it was nice but it had problems. ATI has seriously come a long way in the past year, and I think people need to put their old feelings about ATI behind them, because the only way nvidia can beat the 4870 X2 is with a new card.
The biggest problem at the moment with the GTX 280 is the high temp issues and I've already RMAed my card once and I'm stuck with yet another GTX 280 that has 105C load temps.
Why do some ppl say that Vsync doesn't fix microstuttering?
I mean, if Vsync with triple buffering were possible in CF/SLI, everything should be fine.
So where is something going wrong?
I understand it's in the hardware.
But Vsync somehow "goes around" it.
Yes/no?
Yes :up:
It goes around it in most titles in my experience with SLI. However, Vsync is handled differently in Vista and XP AFAIK, which may complicate things, and there are those games that don't support Vsync on some setups.
EDIT: My upcoming post about my own microstutter findings is temporarily delayed because I'm downloading the FFCM8Final mod; the download complicates the last tests and gives strange results. Will make a separate thread in the computer discussion forum, and 10 games will be tested at various resolutions and settings. I am only half done so far.
No, it doesn't, because then the input lag makes apps unplayable. I mentioned this in the past too...it gets so bad that in FEAR, for example, I can dominate any server when playing with a specific friend. We just have our tactics down, blah blah blah... Anyway, enabling v-sync, I'll be at the bottom of the server...no matter what. It's hard to react to something when it's already happened, and you see it after the fact...I mean..reaction time means NOTHING then.
It's a lose-lose situation. Especially when the multi-gpu target audience is what it is at the moment.
jdhann, IF they have disabled the sideport in single-GPU configurations, they aren't very smart. I don't understand how you think this would make ANY sense at all...
Why would they purposely gimp the gpu, when even the heatdump is unimportant?:stick:
I mean, with Hector, anything is possible...but really now.
Some people may not notice it, but I do, obviously. Then there are situations such as in that video where ANYONE can tell...you can't broadly define the problem, unfortunately, and this is why it's such a bugger.
Heh. I mean, it could be bad enough to cause seizures, I suppose. It's very disorienting, and for some games it means they've been shelved. Stupid 30-inch monitor is the bane of my computing experience, I tell ya.
HAHA same here, my 30-inch monitor means I always have to have the fastest single-GPU card around, aka the GTX 280 for now =P
Oh, and I realize they may have improved microstutter, but it was readily apparent on the 3870 X2. I'm not sure I want to splurge on an R700 if it will still have the same issues, even if it's only in certain games/drivers etc.