I personally didn't see any in GRID with the 1.2 Patch and 8.6 Hotfix with 2x 4850's. Didn't see any in Crysis, or GRAW2...
Yes, that's exactly what I tried to explain. Look, I was speaking about minimum and maximum being all over the place, while the average is displayed smoothly. Sorry, I forgot to mention I'm using Crysis' built-in framerate counter, which shows min, average and max all at the same time (console command: r_displayinfo 1).
That's how I first noticed it, just before Crysis was released and BEFORE this microstuttering issue became widely observed. I was looking at that built-in counter or FRAPS and realized I'm getting nowhere near what they are saying, but possibly what that minimum FPS counter shows. At least it feels just as low as it shows. When I use CF in Crysis I get about 10 FPS min, 40 avg and even over 100 FPS max, and the min and max counters are all over the place (the average is somewhat steady, as it should be). But it surely doesn't feel like 40 FPS. It feels like 10 FPS at times (looking at that smoking red flare at the beach, for example, and moving the mouse of course).
The weird thing is, in Oblivion with AA+AF and texture packs, sometimes in heavily forested areas the built-in FPS counter shows as much as 55 FPS but it feels like 15. This could be the same thing as in your video: it's jumpy, with sudden stops. But when the FPS goes over 60 and stays there, it's gone instantly! I had to give up 4xAA+16xAF just because of that. 2x edge-detect + 4xAF avoids it fairly well.
Why did GRID suddenly become the microstuttering benchmark tool?
I'm wondering the same. I haven't really noticed it in GRID; actually that's the most stutter-free game I have right now. That might be because the FPS is pretty much always over 60 now when CrossFire is actually working. Has anyone else noticed that microstuttering is harder (or impossible) to notice when the FPS goes over 60? Is it because even when frames are rendered completely out of time (i.e. both cards finish a frame almost at the same moment) there's still 30 FPS left, which can be considered smooth?
Because the large textures are memory-intensive, as are a lot of newer titles (as most DX10 titles will be), and it highlights one of the biggest causes of "stutter": using memory links for inter-GPU communication. It also highlights issues with running physics simulations on the GPU in the same respect.
In the end, if RV770 is not prone to these issues, it highlights the benefit of the new sideport vs. other technologies, which would be a design win for AMD. How GPUs communicate with each other in rendering tasks is essential in the future, as silicon process problems and limits come into play... although many are still denying it, multi-core processing is coming, whether we like it or not, and if your cores are fantastic but their communication sucks, your cores suck too.
FPS seems smooth over 60 because any frames over 60 get discarded anyway (not really, but almost), so this is what is ideal for LCDs. But as the "refresh rate" of LCDs increases, performance within those margins becomes more important as well... 120 Hz LCDs mean 60 FPS is no longer enough for that "smooth" experience.
There are times, however, when even this is not enough FPS because of other issues already mentioned in this thread. At that point drivers can fix most of it, but that also requires work for each individual app, since the driver must schedule the workload properly.
Understanding the importance of workload balancing also makes me aware of how unimportant, really, this Hydra tech is, as GPUs already do this anyway... the only benefit would be using different display devices, but even then issues come up.
This is why this is so important now, even though it has been put on the back burner in the past... we are at a critical stage in this regard... LCD tech cannot make the jump if the hardware driving it cannot keep up.
First answer from AMD:
http://plaza.fi/s/f/editor/images/X-...1331120313.jpg
"2nd generation PCI Express bridge design" of course means that the PLX chip has been upgraded to support the PCI Express 2.0 standard.
"Next generation GPU inverconnect for improved scaling" means more bandwidth for data transfers both between cards and between system memory.
The bandwidth between the two GPUs has been bumped from 6 GB/s to 20 GB/s (according to the HardOCP preview).
Waiting for more information :)
Could it be threading issues in GRID?
As I understand it, there are a lot of reasons why a game will "stutter" or lag.
One scenario could be threading problems.
Bottlenecks will slow down performance; they are created when something is overloaded with too much work and cannot keep up with the task.
Here are some thoughts about C2Q and threading.
A C2Q is two C2Ds without internal communication; they communicate through the FSB. Intel's design also tries to use the FSB as little as possible because it has latency issues (it's the slowest path in the computer). It is vital that the cache is used well for a C2D or C2Q to be fast, and that means the hit rate needs to be very high.
Take the C2Q, which contains two C2Ds (I'll call them A and B here). One thread sits on a core in C2D-A, another thread on a core in C2D-B. If the thread on C2D-A is then moved to a core on C2D-B, it has to re-fetch all the data that was stored in C2D-A's L2 cache, and all that data has to go through the FSB, which isn't that fast (high latency). Once the thread has moved, the cache hit rate drops until the data has been processed again. That FSB traffic also has to handle graphics I/O, which may slow it down further, and maybe switching in the northbridge takes extra time. So for a fraction(?) of a second the C2Q is slowed down, until the cache is refilled and the FSB gets back up to speed because there is no longer a queue of data waiting to be sent or retrieved.
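Just to illustrate the idea (this is my own sketch, not something from the post above): a game or a tester could rule this migration effect out by pinning a thread so the scheduler never moves it to the other die. A rough Windows-only sketch in Python with ctypes; the assumption that cores 0 and 1 sit on the same die of a C2Q is mine:

Code:
# Hypothetical sketch: pin the current thread to one core so the Windows
# scheduler cannot migrate it to the other die of a C2Q.
# Windows-only; bit n of the mask = logical core n.
import ctypes

kernel32 = ctypes.windll.kernel32

def pin_current_thread(core_index):
    mask = 1 << core_index                    # one bit per logical core
    handle = kernel32.GetCurrentThread()      # pseudo-handle for this thread
    previous = kernel32.SetThreadAffinityMask(handle, mask)
    if previous == 0:
        raise OSError("SetThreadAffinityMask failed")
    return previous

# e.g. keep the render thread on core 0 and a worker on core 1 (same die),
# so neither has to refill its L2 over the FSB after a migration.
pin_current_thread(0)

If the stutter really came from threads hopping between the two dies, pinning like this should make it go away; if it doesn't, the cause is elsewhere.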
If this is the case, then microstuttering (or FSB-stuttering) could be a problem related to games that scale to a lot of cores and use a fair amount of memory.
All processors are more or less sensitive to threads being switched to other cores (it depends, of course, on how much memory they are using). I think Vista is NUMA-aware and Phenom supports it. NUMA is a technique that adds some intelligence about where threads are placed in order to optimize memory latency. I know some say NUMA gives no advantage, but it is very hard to measure the gain because a non-NUMA setup isn't hurt by issues like the one described all that often. Still, this microstuttering (or I/O problem in this case) could be something that NUMA helps with. I think Nehalem is going to have NUMA, and I don't think Intel put it there for fun.
Also, both Nehalem and AMD's Phenom aren't as sensitive to unoptimized threading as the C2D and C2Q.
So, are you Sampsa ready to start building another testing rig over a Phenom? ;)
I think we need some proof if it's a threading problem before all crossfire users switch to Intel-free rigs.
And more efficient drivers.
A lot of the problems I am seeing stem from the different testing operating systems. Some use XP, some Vista, some use XP 64, and others use Vista 64: four completely different sets of drivers, with different kernels and different everything. Do I need to list the differences?
And one of the problems I am noticing is people are posting and not reading...
PS. At XS, I think when posting a review link, it must become mandatory for the poster to list the operating system used in the title. Otherwise, that review cannot be compared to any other with any degree of confidence.
Sounds like a pure bandwidth issue to me.
Got any proof of this, or is it just speculation because you prefer AMD?
Anyone out there with a Phenom on 3870x2?
And then what does this have to do with the topic? It seems like it came completely from left field. Is this the whole slower but smoother concept?
So to understand what is causing delays in the GPU, you posted an untested and unproved theory on a CPU?
Proof would come from testing the hypothesis.
Fail.
And also at http://www.hardforum.com/showthread.php?p=1032761583
For such an untested theory, it sure is being plastered everywhere.
Though his comments were a tad off topic, there is no reason to go down the aggressive route.
Until the card is officially released and tested, all we are doing is speculating and there are many forms of speculation, none of which require your approval before they can be posted.
If you don't like a post, respond to it constructively or don't respond to it at all.
I, for one, think that the xfire scaling issues in Crysis are a separate issue, seeing as the 4870X2 in xfire scales pretty well in some games but doesn't in others, meaning it's probably an application-based issue.
Perkam
It's frustrating. AMD are like the proverbial willy (pee pee) teasers: they tempt us with this new-found 2-teraflop powerhouse, show us just how much power it gives in a lot of current games, yet make us wait... for nearly a month.
We know so little about the R700, yet also so much...
Roll on with the next set of (p)reviews with more mature drivers and board revisions!
John
Any other games besides Grid?
Crysis, CoD4, UT3 maybe?
Reading VR-Zone's review, they say microstuttering still exists.
http://www.vr-zone.com/articles/Grap...0/5935-14.html
Quote:
ATI's drivers still needs some polishing for CrossFire setups. During the course of our testing, we noticed strange and random artifacts appearing in our Unreal Tournament 3 benchmark, as well as micro-stutters which occur very randomly as well in other softwares.
As much as speculation doesn't require my approval, me posting whether I approve or not doesn't need yours :up:
It's one thing to speculate. It's another when it's obviously biased and then cut/pasted into every microstutter thread he can find. The problem is that usually a hypothesis is based on limited observation and then tested further; this one has been shown to have absolutely no thread of truth to it at all.
For one, it doesn't explain why there isn't microstutter on single-GPU solutions. It doesn't explain microstutter on a C2D. And it doesn't explain why there is so far no microstutter on the 4870: not only the X2, but plain CF isn't showing it so far either. Sure, there isn't enough testing yet, but more has been done on just the X2 than on his 'theory'.
I don't know. What I do know is how these processors work internally, and knowing that, you can draw some conclusions. I have tried to find reviews and tests that look for these specific issues, but they are impossible to find. They never test I/O when graphics reviews are done; they always use Intel systems for testing video cards. I'd need a different CPU in the test to see the variance. Checking that variance you could draw some conclusions, and if they match, you have proof. When they test CPUs they always test at low resolutions.
Some patterns can be noticed looking at members' reports on forums, but those are very hard to use as proof. It is also difficult to know whether a review site is neutral.
Also, this is very specific to gaming. People are really sensitive when you question I/O in gaming. With other types of software that are very memory- and/or I/O-intensive there's no problem at all talking about it. Xeon scales like :banana::banana::banana::banana: on servers running in-memory databases, for example.
These very fast video cards that are out now should really be very heavy on I/O. If there is also threading (maybe the game is running on three cores, all of them sending and receiving information from external hardware), then there is thread synchronization on top of that.
If it is a problem on servers, it really should be a problem in gaming too. FSB latency goes up when there is a lot of traffic. I think the thread scheduler doesn't switch threads during I/O operations (it depends on how they are done).
Actually, I go around pwning myths and biased discussion all over, so it's not a matter of need, but more a matter of when you'll get it if you go around threads making useless posts :cool:
Reply to the actual post and care less whether it is cut and paste; if you focus on that, you're going off topic. Sampsa has already shown that the micro-stuttering is better than on the 9800GX2, and with the absence of a GTX 2x0 X2 due to thermal, die size AND power issues, we can already say that the 4870X2 has brought us a legitimate improvement.
I think he wants us to wait for official launch before we can make any conclusions, which is his right :)
Perkam
I can see there is still a huge debate over this.
While I suppose these results from GRID aren't enough to debunk microstuttering on the 4800 series, they do look promising.
The next time you play GRID, run FRAPS. Hit F9 to record your race (either while racing or while watching the replay). You should notice the "micro" stuttering everyone is alluding to. I still don't believe it exists solely as a dual-GPU problem, because that example alone can create "micro" stuttering just by looking at the screen (among other scenarios mentioned in post 185).
Here is one test that could indicate there are some problems in the total package for Intel when games run at high res and/or are graphically advanced.
In these tests a Phenom 9600 (2.3 GHz, 2 MB L2 cache) wins over a C2Q (3.2 GHz, 12 MB L2 cache):
http://www.overclockersclub.com/revi...l_q9450/14.htm
http://www.overclockersclub.com/revi...l_q9450/13.htm
http://www.overclockersclub.com/revi...el_q9450/8.htm
It seems that games that are heavy on I/O are a problem on Intel. Whether it is something wrong with the motherboard or some other problem there, I don't know. But that is the only test I have found where they compare AMD with Intel at high res with PCI Express 2.0. The video card used is an 8800GT.
lol again this strange review... :rofl:
Look at the CoD4 test and compare 1680x1050 with 1920x1200 (overclocked Q9450). There is no chance in hell that with the same single (stock) 8800GT the framerate stays the same when you increase the resolution; not even a gazillion more MHz on the CPU will make your game run faster when the limiting factor is the GPU...
Same with WiC, but now on the Phenom rig -> 1024x768 28 fps -> 1280x1024 28 fps -> 1680x1050 28 fps -> 1920x1200 28 fps...
is it?
The thing is, I believe this test is more credible because it isn't perfect. There is a lot going on in gaming and there could be other things on the computer that affect the game; of course it could also be the human factor. But the scores seem honest; they don't care if it looks strange.
The big problem is that it is very hard to find a test that actually checks this issue. You need to compare one slow and one fast processor at high res with one very fast video card, where neither the processor nor the video card holds the speed back. Best would be to compare a slower AMD Phenom (9550) on a good motherboard (now that these 790GX boards are coming, that would be very interesting) with a very highly clocked Intel, both using a very fast video card. Then you have one setup with very strong I/O but a processor that isn't that fast, and another with a very strong processor where the I/O and memory transfers aren't as fast. Checking the behavior of these two computers would say a lot about why things happen.
There is another, similar scenario if you compare a single-core processor with a dual-core. Say you have a single-core running at 3 GHz and a dual-core running at 1.5 GHz, and you use them for normal work such as word processing, surfing, chatting etc. Which processor gives the most pleasant experience?
If you benchmark them, the 3 GHz single-core will win big over the dual-core @ 1.5 GHz.
But if the 3 GHz chip runs several threads and one thread is set to a higher priority and does some demanding operations, what happens to the other threads? They just stop.
If one thread on the 3 GHz chip gets stuck in a loop at higher priority, then the other threads will also be veeery slow.
What happens if a game is multithreaded, or there are other threads running in the background of a game, and one of those threads is set to high priority and does something memory-intensive? Remember that the only path a C2D or C2Q has to communicate with other hardware is the front side bus. That traffic is probably handled in hardware, but it could be that two threads end up on the same core, and other threads could be left waiting for one thread to finish.
And that's the problem with that review: they are using an 8800GT, which means they should see serious drops at resolutions above 1680x1050, yet there are some totally inconsistent numbers (and there are a lot more in that review than the few I have pointed out).
Most games aren't really memory bandwidth hungry. That's why you only see very small increases in FPS (even on the AMD platform) when you use faster RAM.
I also play games and do some crunching (Rosetta) at the same time, but I've never noticed any slowdowns while playing, not in SC or Crysis.
As said in another thread, this review needs some deeper digging and some cross-checking.
No, but some are very I/O hungry, and the problem here is GRID. That game seems to scale well with threading and it is also graphically intensive. And what we are discussing here are situations that could slow the game down temporarily.
If the problem were in the GPU, it would be more logical for it to show up in more games. New games that come out, if they are heavy on I/O and use more threads, will probably confirm this or prove it false.
Any source for those claims?
I just played a bit with the system monitor in XP, which allows you to log I/O for a certain app. Crysis averages ~100 I/O operations per second and ~1.7 MB/s transferred (peak of ~50 MB/s and 6000 I/O while loading). (The level was Onslaught, which has quite a bit of action going on. ;) )
I don't know about GRID, but I doubt it will be significantly higher.
I don’t think that you only have ~100 I/O requests per second to the GPU playing Crysis ;) You have A LOT MORE.
Also it isn’t the size that is important; it is the number of requests to memory and GPU that is important.
Say you fetch 1000 bytes (not much) but you get one byte per request. If every request is a cache miss and it takes about 250 clocks to service each one, you spend 1000 * 250 = 250,000 clocks getting those 1000 bytes. Compare this with finding the data in the L2 cache (15 clocks): 1000 * 15 = 15,000 clocks. If the front side bus is busy with another thread, or the cache is being used by another application, the hit rate will go down and, as you can see in this example, so will the speed.
If one thread takes all these cache misses and slows down, and another thread depends on its work being ready, then that thread will also be slower.
Now, it will not be that inefficient in games, but the requests are normally small, and it isn't the size that matters, it is the latency of each request. Overclocking the FSB improves performance for the whole computer on Intel because the latency goes down.
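To put that hit-rate argument into numbers, here is a tiny back-of-the-envelope model. The 15-clock and 250-clock figures are just the ones assumed above, not measurements:

Code:
# Rough cost model for 1000 one-byte requests, using the clock counts
# assumed above (not measured): ~15 clocks for an L2 hit, ~250 clocks
# for a miss that has to go over the FSB to memory.
L2_HIT_CLOCKS = 15
MISS_CLOCKS = 250
REQUESTS = 1000

def total_clocks(hit_rate):
    hits = REQUESTS * hit_rate
    misses = REQUESTS - hits
    return hits * L2_HIT_CLOCKS + misses * MISS_CLOCKS

for hit_rate in (1.0, 0.95, 0.80, 0.0):
    print(f"hit rate {hit_rate:4.0%}: {total_clocks(hit_rate):>9,.0f} clocks")

# hit rate 100% ->  15,000 clocks; hit rate 0% -> 250,000 clocks.
# Even dropping from 100% to 80% hits roughly quadruples the time,
# which is why a falling hit rate shows up as a sudden slowdown.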
Crysis is almost a single-threaded game, so the scaling problem may not show up there. If just one thread manages the I/O and/or is also the main memory user, then I don't think it will be a problem.
If you are using a lot of threads and all of them are sending or receiving information, complex situations can appear. Not often, but in a world that executes ~2,000,000,000 operations each second, "not often" can still be quite often if you think about it. Also, if other applications on the computer are doing work, the game has to share resources and that slows it down.
On AMD the situation is different because it has HyperTransport, which handles I/O on its own and doesn't have to compete with memory traffic. Also, the AMD Phenom can read and write data at the same time.
I don't notice micro stuttering on my 4850 CrossFire setup. I did notice it before with SLI, but not with these new cards.
Interesting to hear, GAR - along with the preliminary results from Sampsa, it looks like ATI did indeed do something, either driver-wise or hardware-wise, to fix the issue.
Thank you very much GAR! I've been wondering if it could still affect the 4850 (you prove this wrong, thanks) or different hardware configurations (chipsets, OSes). It would be nice to hear more comments from 48x0 users. Once you've seen microstuttering, it's pretty easy to notice it afterwards. Thanks, Crytek, for this great microstuttering test called Crysis.
I was thinking the same thing. It seems that the AMD CPU fanboys who are really dedicated to the cause (those who refuse to leave in droves like pilgrims to the promised land of Core 2) always drag that review out.
I think..
They have a point: it's nothing to do with the CPU, but more to do with the chipsets. I believe the AMD chipsets have a faster or more favourable PCI-E 2.0 controller; either that or Intel X38 and X48 boards require a BIOS update to ensure there are no bottlenecks at higher resolutions.
I vaguely remember reading something along the lines of (slightly off topic) Intel's ICH only offering six PCI Express lanes, while AMD relies on its northbridges for PCIe links, but that might not be true.
Anyway I think ATi have done something to reduce microstutter (judging by the results in this thread), however it looks like it might still be around...only subjectively though...or very hard to detect.
John
That is not microstuttering, that is hardware not up to the spec to run an application. Microstutter is best described as fluid motion with the odd bit of slow motion (not jerky) and then back to fluid motion; a bit like playing a game @ 100 FPS that pauses, then hits you @ 30 FPS, then shoots back up to 100 FPS, all in less than a second (and entirely at random).
John
In FurMark I notice these stutters even at 100+ FPS using a single-GPU solution. Could it all just be bad programming by the game devs, or flawed engines to start with? Does setting the maximum number of pre-rendered frames higher help with Nvidia cards? Interesting stuff.
Many many questions, but that's only a good thing in my opinion :)
:confused: The northbridge of X38/X48 has 32 PCIe 2.0 lanes and the ICH9(R) southbridge offers another 4 lanes.
For P45 it is 16 PCIe 2.0 lanes, plus 6 on the ICH10.
I think the main factor in why there is less stuttering is that the cards are faster. Make fun of it, but I think that's the biggest reason why we don't see MS so often now. MS is mostly noticed in the 30-40 FPS region, and in most games CF can now push past this limit -> less MS.
Bottlenecks everywhere; there always will be if you put the settings high enough, particularly with badly coded games :p:
But hey, I can play TF2 :hehe: ...and GRID, but only at 16x12 really, with a :banana::banana::banana::banana:load of AA.
You exactly described Crysis with CrossFire. Average FPS is pretty good, 40-60, max is over 100 FPS, but it still feels like 15-20, even 10 FPS at times. How YOU see it on YOUR system has necessarily nothing to do with everybody else's systems. Crysis is absolutely the best microstuttering testing tool I've found. Actually, microstuttering is somewhat continuous when you are playing: if your average FPS is 40, you should see/feel stuttering 20 times per second in the worst case. Not fast-forward-stop-fast-forward or anything like that; that sounds more like a system fault than MS, to me. But every system seems to behave differently. I have seen MS on three different CrossFire systems, and every one stuttered like the framerate were cut in half or more.
Quote:
That is not Microstuttering that is hardware not upto the spec to run an application, Microstutter is best described as fluid motion with the odd bit of slow motion (not jerky) and then back to fluid motion, a bit like playing a game @ 100FPS that pauses and then hits you @ 30fps then shooting back up to 100FPS all in less than a second. (and entirely at random).
L7R, I assume all 3 of them were not 4800+ CF systems?
I believe the lackluster texturing and Z fillrate (and also, to an extent, internal differences) made the 2900/3800 in CrossFire not too rosy for over-demanding stuff (stuff that you can't play smoothly on one card).
Watching for more developments
This text might be of some interest here
Studies of Threading Successes in Popular PC Games and Engines
Is the final review ready for publishing? Eagerly awaiting results. Thanks.
Microstuttering is in every game with every config.
Why in Sampsa's test we can't see MS?
Because with 48x0 CF, GRID is CPU bottlenecked. When there is a CPU bottleneck, MS disappears.
Also, Vsync only helps when you're above 60 FPS average. When you're under that, it's even worse.
I'll test it with the X2 when I get one :P
Conclusions made after a few weeks of testing with 4 different CF setups, a Phenom, a QX9770 and 16 games.
Article with nice graphs soon :P
ftp://bf2.xweb.org/Movie.wmv
A video showing FPS in GRID while encoding a video at the same time.
So is MS essentially considered "fixed" (ie disappeared/smoothed out) with current gen cards or not? Or is it just CPU bottlenecking?
CPU bottlenecking that fixes MS? :confused:
Quote:
When there is CPU bottleneck, MS dissappears.
I'm going on PCGH's HD4870X2 preview, hence my points.
Bedlamite claims he has the card and will test it. Fact is, even if one person tests one set of games on one system, his/her results may be different from another person's experience with MS.
The buyer must decide how much of an effect it is going to have on his/her experience and though no one can come out and claim the HD4870X2 is useless because of M/S, the fact remains that if ATi solved the xfire scaling issues alone by the time it launches, you're looking at one MONSTER of a card for $499.
Perkam
A CPU bottleneck somehow "synchronizes" the frames, because the GPUs have to wait for the next frame.
The worst case of MS is in CoJ, which is completely unplayable below 70 FPS.
GRID is completely unplayable below 40 FPS, and at 50 FPS you can see it starting to slow down.
Crysis is quite OK at 40 FPS with CF because of motion blur.
Today I'll finish the 3850 CF tests, and next week I'll write up the article.
I have no idea how it looks on the 4870X2, because I still don't have one, but I hope I'll get one soon for tests ;)
One alternative explanation for why the game could be smoother when the processor works harder is how games handle threading.
I think most games that use threads have one render thread. This thread is probably also the most important one: it may have a higher priority, or the game may even reserve one core for it. If the game ties one thread to one core like this, it has more control over how it behaves on the processor and how the GPU is fed with data for rendering. There are some drawbacks to this technique: it scales well to two cores, but it won't use the full power of the processor if the processor has more than two cores.
If the game doesn't take control of how its threads are scheduled and where they are placed, that task is handled by the operating system. The operating system places threads according to its own rules: maybe it sees one idle core while another core has two threads, and decides to move one thread over. I don't know the logic of how the thread scheduler works; I think Vista has a much improved scheduler compared to XP, though. It could also be that it places two render threads on the same core, and if one core has to handle two threads it will of course slow down the game.
If you are running a C2Q and a thread is moved from one C2D to the other (a C2Q is two C2Ds glued together, and the caches of the two C2Ds don't communicate), the cached data is invalidated and the core needs to get the data from memory again. This stresses the FSB and the computer slows down for a fraction of a second.
It could be that if the processor isn't working as hard and it is a quad, threads get moved around more than when the processor is working harder.
This is just speculation.
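If someone wants to actually see how much the scheduler moves a thread around rather than speculate, the core a thread is running on can be sampled over time. A rough Windows-only sketch in Python with ctypes (GetCurrentProcessorNumber needs Vista or later); whether the switch count differs between the idle and busy case is exactly the kind of thing the post above is guessing about:

Code:
# Hypothetical observation sketch (Windows Vista or later): log which core
# this thread runs on over a few seconds to see how often the scheduler
# moves it around when the thread is mostly idle vs. mostly busy.
import ctypes
import time

kernel32 = ctypes.windll.kernel32

def watch_core(seconds=5.0, busy=False):
    seen = []
    end = time.time() + seconds
    while time.time() < end:
        seen.append(kernel32.GetCurrentProcessorNumber())
        if busy:
            sum(range(10000))         # burn a little CPU between samples
        else:
            time.sleep(0.01)          # mostly idle thread
    return seen

for busy in (False, True):
    cores = watch_core(busy=busy)
    switches = sum(1 for a, b in zip(cores, cores[1:]) if a != b)
    print(f"busy={busy}: cores seen {sorted(set(cores))}, switches {switches}")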
Well, your speculation hasn't got much to do with the source of MS, and I think most multithreaded games are a bit more complicated in how they thread.
About MS, some people have explained it like this:
Let's say the GPU takes 30 milliseconds to render a frame and the CPU is fast enough to provide the GPU with a new frame to render every 5 milliseconds (just some random numbers). On a single-GPU system this would result in around 33 fps, while a CFX/SLI system would be able to deliver 66 fps when fully optimized. When using AFR, though, frame number 1 will have taken 35 ms to appear, and frame 2 will have been there 5 ms later. Frame 3 would then again take 30 ms, while frame 4 will be there only 5 ms after it. That's because GPU0 starts rendering the first frame it gets from the CPU while GPU1 gets the second one; there is only a 5 ms difference between those render starts, but they both need 30 ms to render their frame. This would be the case with AFR at least, and it could have been a nice explanation for the problem, but it seems that other rendering techniques also sometimes suffer from the same issue, which makes the problem seem more complicated. This inconsistent rendering under AFR can be solved by simply starting to render the second frame a little later, in this case 15 ms after the first render start, and this is what NVIDIA and AMD tell the GPUs to do in some (if not most) cases.
If in this setup you replaced the CPU with one that is only capable of passing on a new frame to render every 15-20 ms, you would solve the problem by simply inducing a CPU bottleneck.
This too is all a bit of speculation, as I'm still not completely sure what the reasons for micro stuttering are.
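The AFR timing described above is easy to play with. A toy model, using the same made-up numbers (30 ms per GPU render, the CPU handing out a frame every 5 ms), shows why the average FPS looks fine while the frame-to-frame gaps alternate, and why slowing the CPU down to one frame per 15 ms evens them out:

Code:
# Toy AFR model with the made-up numbers from the explanation above:
# each GPU needs 30 ms per frame, the CPU can submit a new frame every
# cpu_ms milliseconds, and frames alternate between two GPUs.
def afr_frame_times(render_ms=30.0, cpu_ms=5.0, frames=12):
    gpu_free = [0.0, 0.0]            # when each GPU becomes idle again
    finish = []
    for i in range(frames):
        submit = i * cpu_ms          # CPU hands out frame i at this time
        gpu = i % 2                  # alternate frame rendering
        start = max(submit, gpu_free[gpu])
        gpu_free[gpu] = start + render_ms
        finish.append(gpu_free[gpu])
    return finish

for cpu_ms in (5.0, 15.0):           # fast CPU vs. "CPU bottlenecked"
    t = afr_frame_times(cpu_ms=cpu_ms)
    gaps = [round(b - a, 1) for a, b in zip(t, t[1:])]
    print(f"CPU every {cpu_ms:4.1f} ms -> gaps between frames: {gaps}")

With the 5 ms CPU the gaps settle into 5, 25, 5, 25... (the classic micro-stutter pattern at a perfectly "healthy" average FPS), while with the 15 ms CPU they become a steady 15, 15, 15..., which is the "CPU bottleneck smooths it out" effect mentioned earlier in the thread.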
So... there was no conclusion in the original post, just graphs. Is microstuttering solved or not? From a quick glance the 4870X2 looks the same as the 4870... so microstuttering = solved??
Crysis is ok at 40 FPS with SLI, but with a single card 40 FPS Crysis means "really smooth", at least to me.
My 8800GT SLI was unplayable below 40 FPS in Crysis. However, a single 4870, which gives somewhat lower FPS, is much, much more playable.
I wish I hadn't sold my 8800GT's before I got my 4870. I would illustrate the difference perfectly with a Handycam.
annihilat0r, I have a really nice graph with frame logs from 4850 CF and from a GTX280.
Both cards show a 40 FPS average in the place where the logs were taken.
It's quite interesting to see what happens there :)
And yes, Crysis at 40 multi-GPU FPS is "OK", but 40 FPS from a single card is much smoother.
When I have a bit of time today I'll post it here.
http://img339.imageshack.us/img339/6333/micromi8.th.jpg
100 frames taken from Crysis, Very High, 1680x1050.
One color is for the 4850 CF and the other is for the GTX280.
Both show a 40 FPS average.
Guess which one is for CF and which is for single ;)
^^^ The one in red can't be the GTX 280; I have a GTX 260 and the frame rates don't swing like that at all.
I have run a few tests in Crysis, Oblivion and COD4 with 4870 crossfire and the frames were rendered evenly in all 3 cases using fraps...
I truly believe the results can change from system to system, because in all my testing there was no evidence of micro stutter...
What resolution?
CoD4 with 4870 CF is almost always CPU bottlenecked, so it's the wrong title for such a test ;)
Try HL2: Episode Two or STALKER. These titles have really good engines which are almost never CPU bottlenecked; it's most visible in those games.
Or just try Call of Juarez and enjoy some nice megastuttering ;)
You have a Q6600. At resolutions below 1920x1200 you can be CPU bottlenecked almost all the time with your CF, so you won't notice stuttering.
Yup, something I implied a while back. MS has been talked about for over 8 months now (that I'm aware of) and no one has provided even rudimentary evidence to show that MS is specific to current-gen single cards, SLI/CF or X2 setups when playing PC games (induced stutter, graphs and FRAPS don't count). So far many are starting to see that stutter (in one form or another) isn't video-card specific but can be contributed to by other factors, and is therefore not necessarily MS.
Some will see a red flag in the length of time that has elapsed between the word "microstutter" first being used and today. Meaning, after all this time there is still no rudimentary, concrete proof of its existence specific to a current-gen X2, CF/SLI or a specific brand of video card.
Eh...
You want me to show that none of the points you mentioned in your post apply to my tests?
If you want me to, I can do that.
But before you claim that MS doesn't exist, just take a good 30" display, get CoJ, get 4870 CF, run around a bit in the grass and tell me that you can't notice that something is wrong.
Or just take two 3870s with CoJ and a decent res (1680x1050).
In this case it's a specific problem between the engine, the driver or something else. The same grass stutter can be found in DiRT, using the Ego engine which was improved upon in GRID. This is not an example of normal PC gaming but an example of a specific moment in the game, which allows other possible explanations besides the video card's architecture.
So I was so unlucky that I found 16 badly coded games made on the most popular engines (UE3, Source, id Tech 4, Cry and a few others).
I'm talking about CoJ because it's most visible there, but it's not only visible there -.-
Like I've already said, you won't notice MS if you're CPU bottlenecked, and I was bottlenecked quite often even with a QX9770 @ 4 GHz and 4870 CF...
A few minutes ago it was CoJ and its grass, which was debunked because "your stuttering problem" can be the result of other issues. Now it's a whopping 16 other vague games? Read your own post:
In any case this only reinforces what others are starting to see or already know: stutter is not specifically a video card (or multi-card) problem. Your PC setup (the combination of hardware and/or software used) can contribute to stutter. Besides, the amount of time that has elapsed with no real proof should raise a red flag by now.
How many of you are actually using TRUE 16x/16x motherboards?!
16x/8x is not gonna make for smooth gameplay, and neither is 16x/4x.
Most Intel-chipset boards are 16x/8x or 8x/8x; even though they say the board is 16x, it's not.
You will definitely start to see some issues if you try CF on a 16x/8x motherboard; the 2nd slot is bottlenecking you.
(another high five to nvidia and sli for no microstuddering!!!)
Sorry.
Mobos used by me:
Asus P5E3 Premium (X48)
Gigabyte something-790FX-DQ6.
EVGA 790 Ultra
So all were 16x PCIe 2.0.
So Eastcoasthandle, you say that I've had 3 benchmarking setups with "specific combination of hardware/software" causing MS?
GG.
bedlamite, any chance you know when you'll be finished with your testing and can show the results?
Monday will be the last day of my tests (a single 3850 left for benching, and a few GTX280 SLI tests for comparison), and then I'll need a few days to gather it all together.
Unfortunately the first article won't have many SLI or CF-with-Phenom results, because I'm time limited and I have only one QX9770, so it'll be 99% about CF (because of the upcoming 4870X2 release).
But right after the 4870X2 release and tests, I'll do the same thing with SLI (9600GT, 8800GT, 9800GTX, GTX260, GTX280 planned).
IMO the problem is big and it's not a matter of who is and who isn't right; the most important thing is to find out what's going on, and we'll try to use all our AMD contacts to get their attention...
i aint never seen nvidia studder.... ever..
granted I'm the first to admit I haven't been around as many SLI setups as a lot of you have, but I have never seen an SLI setup studder...
I've seen SLI just go to hell and get worse FPS than a single card, but that's not MS, that's the drivers and the game not allowing SLI to work properly.
Lestat.
MS is hard to notice most of the time in real gaming.
In normal gaming, when you shoot, run, "turn on" physics etc., you're much more often CPU bottlenecked.
Even if you're not, you're probably above 40 FPS most of the time, where you can't notice MS with your eyes, especially since most people perceive smoothness quite differently and you can quite easily get used to the MS effect.
Especially if you use a <24" display, you have high FPS and a nice CPU bottleneck most of the time with an SLI/CF setup.
The conclusion is that most people won't ever notice something like stuttering.
SLI also uses AFR, so it will inherently be able to microstutter. This has nothing to do with NVIDIA or ATI, but with how they do the rendering.
And personally, you haven't seen stutter and most people haven't either; a lot of the MS talk has been hyped up and blown out of proportion, considering SLI and CF have been around a long, long time and no one made it a big issue before this year.
WTH?? You weren't around for the 9800GX2, then?!
Quote:
Originally Posted by Lestat
Perkam
Might as well point out what each axis means. To me these are just some meaningless numbers and drawings.
Microstuttering, not microstuddering.
Would be interesting to see your results soon, bedlamite. But I have one question: is the CPU really that much of a bottleneck? From CPU reviews we can see that a non-OCed mid-range CPU such as the E8400 puts out almost the same FPS as a high-end OCed QXxxx, at semi-decent resolutions anyway. So does the CPU affect microstuttering? I don't know, but I think not.
I have really never noticed MS because I'm using only one 8800 card, but I remember that in Need for Speed ProStreet it felt like the scenes at the beginning of a race micro-stuttered. I remember a lot of people complained about it. It got better for me after a while with driver updates, but has anyone tested a multi-GPU system with NFS ProStreet? Not a very good game, but I must say it was a bit more demanding than GRID.
Cooper, I don't think that's hard to find out.
This graph is quite similar to any other connected with MS.
But if you want to know: 0-100 is the frame number, and 0-45 is the time after which that frame is shown.
But well, you don't have to believe me if you don't want to.
Catalyst, in most situations an E8xxx and a QX9xxx at the same clock will have the same scores. AFAIK a core count above 2 only makes a difference in UT3 and Assassin's Creed.
What unit is the time graph shown in? Microseconds?
TBH I never looked into this thing.
Actually in milliseconds.
This graph is made to show how big the delta is for multi-GPU.
As you can see, for a single card the delta is no bigger than 5 ms, and usually it's about 2-3 ms.
For CF the delta is much bigger, up to 30 ms and about 20 ms on average.
Both setups showed a 40 FPS average, but when you run at 40 FPS with multi-GPU you can feel that something is not right there...
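For anyone who wants to reproduce that kind of graph themselves, the raw material is just a FRAPS frametimes log (frame number plus a timestamp in milliseconds); the interesting number is the gap between consecutive timestamps. A small sketch; the file name and the exact column layout are assumptions, so check your own log:

Code:
# Sketch: turn a FRAPS "frametimes" log into frame-to-frame deltas.
# Assumes a CSV with a header line and "frame, time (ms)" rows;
# adjust the parsing to whatever your log actually looks like.
import csv

def frame_deltas(path):
    times = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                      # skip the header line
        for row in reader:
            times.append(float(row[1]))   # cumulative timestamp in ms
    return [b - a for a, b in zip(times, times[1:])]

deltas = frame_deltas("frametimes.csv")   # hypothetical file name
avg = sum(deltas) / len(deltas)
print(f"avg frame time {avg:.1f} ms ({1000 / avg:.0f} FPS)")
print(f"min/max delta  {min(deltas):.1f} / {max(deltas):.1f} ms")

A single card at a 40 FPS average tends to show deltas within a few ms of each other, while an AFR setup at the same average can swing by 20-30 ms, which matches the deltas bedlamite describes above.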
So, as the first HD 4870X2 is up for pre-order in this country now:
Would getting 2x HD 4870s avoid MS compared to an HD 4870X2, or won't it make any difference?
And say I get MS problems with any of the above options, can I 'solve' it by simply setting more and more AA? Or actually less AA?
I'm on a 24" screen and, as always, want everything at super-high details etc. I've never experienced MS (I think) but I'm not really willing to find out ;)
:ROTF:
Quote:
So I was so ulucky that I've found 16 badly coded games made on most popular engines
Time for AFR3
Quote:
thanks to AFR.
or AFR2+:lol:
with advanced synchroniser engine :p:
Wish I could help :D
Quote:
the only thing good about the 9800gx2 is that I was able to step up to a GTX 280
Oh no, NV wins again, but they won't get the biggest 3DMark score... er, I mean wantage.
...but the vsync or capped framerate thing: does that really help in some situations with this MS? ...or not? With multi-GPU?
See, you can use CF with no MS, so go and buy a 4870X2 :p:
Quote:
I have run a few tests in Crysis, Oblivion and COD4 with 4870 crossfire and the frames were rendered evenly in all 3 cases using fraps...
I truely believe the results can change from system to system, because in all my testing there was no evidence of micro stutter....
First off, could someone answer my post above please :p:
Second, wasn't MS originally blamed on the PLX chip not offering enough bandwidth/speed for both GPUs on the HD 3870X2?
I mean, with the HD 4870X2 they put in a 48-lane PCIe chip, 16 for each GPU and 16 for the PCIe slot, and it was said this was enough. If this is indeed enough, then there would be no MS, right?
All these graphs are worrying, although we've never tested it like this before, so this might have existed for quite a while now. If you see these fluctuations over 100 frames with an average of 40 FPS, you're only going up and down within 2.5 seconds; is that even noticeable? :confused:
The first HD 4870X2s are up for pre-order now for only 416 euros so far, which is a hell of a lot cheaper than 2x HD 4870 1GB cards (which go for 240~260 per card). I'm tempted to order it, but if MS is really bad this might not be a good choice. However, if I understood certain posts correctly, any multi-GPU setup suffers from MS, so buying two HD 4870 1GB cards would still suffer from it... right?
And most importantly, if the PLX chip on the HD 4870X2 is no longer a bottleneck, could MS eventually simply be fixed by drivers?
Thx in advance!:up: