ty, now I get it:
repeat (blue line) is comparable to the old IOMeter fill, easily compressible data
pseudo (red line) is harder to compress, but still easily compressible
full (green line) is much harder to compress.
Very interesting data here. I wonder where real-life data patterns would fall, somewhere within the pseudo range? Interesting how much that affects latency; apparently overhead from the compression algorithms on the SF controller?
Write data should be very interesting indeed. Are you seeing results as markedly different with writes as with reads? I would guess the gap is slightly bigger there.
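Out of curiosity I mocked up three buffers like that with zlib to see the spread. A rough sketch only; the fill patterns and compression level here are my guesses, not what IOMeter or the SF controller actually uses:

```python
import random
import zlib

SIZE = 1 << 20  # 1 MiB test buffer
rng = random.Random(42)

# "repeat": a short repeating fill, like the old IOMeter pattern -> compresses to almost nothing
repeat = bytes([0xAA, 0x55]) * (SIZE // 2)

# "pseudo": pseudo-random bytes drawn from a small alphabet -> still quite compressible
pseudo = bytes(rng.choice(b"ABCDEFGH") for _ in range(SIZE))

# "full": full-range random bytes -> essentially incompressible
full = bytes(rng.getrandbits(8) for _ in range(SIZE))

def ratio(buf):
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(buf, 6)) / len(buf)

for name, buf in (("repeat", repeat), ("pseudo", pseudo), ("full", full)):
    print(f"{name}: {ratio(buf):.3f}")
```

The ratios should come out in the same order as the graph: repeat near zero, pseudo somewhere in between, full around 1.0.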
OFF-topic (kinda)
Here is an interesting tidbit I ran across on another forum, an interesting take on it:
So the writes, assuming the data is right, would be where the differences in performance are more noticeable with the SF, or any other controller for that matter. Dunno, I am curious as to you guys' thoughts on that article as well...

Top 5 Most Frequent Drive Accesses by Type and Percentage:
-8K Write (56.35%)
-8K Read (7.60%)
-1K Write (6.10%)
-16K Write (5.79%)
-64K Read (2.49%)
Top 5 account for: 78.33% of total drive access over test period
Largest access size in top 50: 256K Read (0.44% of total)
Using Microsoft's DiskMon, he simply monitored his typical computer usage: browsing the internet, running applications, playing music, etc. In short, he did his best to recreate the computer use of a typical user, then used the program to break down what percentage of accesses fell into each size and type. In the end, it confirms something we always suspected but never really quantified: large sequential reads and writes account for less than 1% of the average user's accesses, while the most common access is small random writes, as shown by the 8K write at over 50%.
http://thessdreview.com/forum/genera...cturers-bluff/
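To make that kind of breakdown concrete, here's a rough sketch of the same tally. The access log below is made up, invented only to roughly echo the numbers quoted above; it's not DiskMon output:

```python
from collections import Counter

# Hypothetical access log: one (operation, size in bytes) tuple per disk access.
log = ([("Write", 8192)] * 56 + [("Write", 4096)] * 21 +
       [("Read", 8192)] * 8 + [("Write", 1024)] * 6 +
       [("Write", 16384)] * 6 + [("Read", 65536)] * 2 +
       [("Read", 262144)] * 1)

def top_accesses(log, n=5):
    """Return the n most frequent accesses as ('8K Write', percent) tuples."""
    counts = Counter(log)
    total = len(log)
    return [(f"{size // 1024}K {op}", 100.0 * count / total)
            for (op, size), count in counts.most_common(n)]

for label, pct in top_accesses(log):
    print(f"{label}: {pct:.2f}%")
```

Summing the top-5 percentages gives the "Top 5 account for" figure in the quote.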