
Thread: IOmeter testing

  1. #1
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838

    IOmeter testing

    IOmeter profiles for download

    Random read 4KB (Testfile 1GB, default QD : 64)

    Random Read 4KB.zip
    Random Write 4KB.zip

    Workstation (8KB - 80% read, 80% random; see the sketch at the end of this post)
    Workstation pattern.zip

    Random read exp2 512B-256KB QD1-64 (aligned)
    rr_0,5_256KB_exp2_QD1_QD64_4KB_aligned.zip

    project random read linear QD scaling (GullLars)
    Random read 4KB-32KB QD 1-32
    project random read linear QD scaling.zip (includes info in zip file)

    ...

    I'll upload more profiles shortly....
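    For anyone unsure what the Workstation profile above actually does, here is a rough Python sketch (an editorial illustration, not one of the attached profiles; the file size and seed are arbitrary assumptions) of an 8KB, 80% read, 80% random request stream:

    Code:
    import random

    BLOCK = 8 * 1024            # 8KB transfer size
    TESTFILE_BYTES = 1 << 30    # 1GB test file (assumption)
    BLOCKS = TESTFILE_BYTES // BLOCK

    def workstation_requests(n, seed=42):
        rng = random.Random(seed)
        offset = 0
        for _ in range(n):
            op = "read" if rng.random() < 0.80 else "write"   # 80% reads
            if rng.random() < 0.80:                           # 80% random offsets
                offset = rng.randrange(BLOCKS) * BLOCK
            else:                                             # 20% sequential
                offset = (offset + BLOCK) % TESTFILE_BYTES
            yield op, offset, BLOCK

    for op, off, size in workstation_requests(5):
        print(f"{op:5s} {size} B @ offset {off}")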
    Last edited by Anvil; 08-16-2010 at 01:19 PM.

  2. #2
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    ** reserved for graphs **

  3. #3
    Registered User
    Join Date
    May 2008
    Posts
    15
    Thank you for posting these. I will give them a try later.

  4. #4
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Mr Anvil, appreciate all your hard work - this will be most useful.

  5. #5
    Xtreme Legend
    Join Date
    Jan 2003
    Location
    Stuttgart, Germany
    Posts
    929
    Don't forget to teach people to use the physical drive and not the file-on-disk emulation layer.

  6. #6
    Registered User
    Join Date
    May 2008
    Posts
    15
    Hi W1zzard, I think you are talking about using an unformatted drive for IOmeter versus letting IOmeter create the test file. What are the performance differences of doing it both ways? For myself, I am testing a current production drive so I can't delete the partition, but I'm interested in what the differences are.

  7. #7
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Instead of creating a new thread I've decided to use this thread.

    There is a new RC of IOmeter with the ability to select grades of random data.

    You can download the RC at Link

    Don't overwrite the old IOmeter, as the new one still has a few annoying bugs; unzip it to a new folder instead.
    Bugs that I've observed so far:
    If you save a new config using the RC you won't be able to load it again. It can be fixed by changing the header in the config file, but this is the only/main reason for keeping the old IOmeter.
    The speedometer crashes the GUI on my computer.

    Keep in mind that this is, after all, an RC, so there might be other bugs.

    Capture.PNG

    There are 3 options

    1) Repeating bytes
    Compressing a 1GB testfile using 7z (normal) results in a ~460KB file.

    2) Pseudo random
    Compressing a 1GB testfile using 7z (normal) results in a ~1.73MB file. (edit: from 1.73MB to 64MB, so far)

    Update:
    Note: having tested the pseudo random setting, it seems that the data generated is in fact so random that it just won't work for comparing IOPS.
    For this setting to be usable for comparing/testing SF based drives it has to produce the same level of compression over and over again, and it just doesn't.

    3) Full random
    Compressing a 1GB testfile using 7z (normal) results in a ~376MB file.
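    As a rough illustration of how differently such patterns compress (my own sketch using Python's zlib as a stand-in for 7z; the "low entropy" buffer is just a middle-ground approximation, not IOmeter's actual pseudo random generator):

    Code:
    import os, random, zlib

    SIZE = 4 * 1024 * 1024          # 4MB sample instead of the 1GB testfile
    rng = random.Random(0)

    repeating = bytes([0xA5]) * SIZE                      # "Repeating bytes"
    low_entropy = bytes(rng.choices(range(16), k=SIZE))   # limited alphabet, still compressible
    full = os.urandom(SIZE)                               # "Full random"

    for name, buf in (("repeating bytes", repeating),
                      ("low entropy", low_entropy),
                      ("full random", full)):
        ratio = len(buf) / len(zlib.compress(buf, 6))
        print(f"{name:15s} compresses roughly {ratio:7.1f}:1")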

    I've performed a few tests on a single SF drive; I'll post them a bit later today.
    Last edited by Anvil; 01-09-2011 at 03:06 PM.

  8. #8
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    Thanks! Can't wait to see your SF results with incompressible data.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  9. #9
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Well, the data isn't incompressible.

    It seems, however, that it takes a hit pretty quickly: with the data used, the hit is approximately 20-25% on reads. I'll get back to writes later.

    To recap:
    repeat (blue line) is comparable to the old IOmeter, easily compressible data
    pseudo (red line) is harder to compress but still compressible. Update: compression varies and the factor is not predictable
    full (green line) is much harder to compress.

    4KB
    v260_iometer_rc.JPG

    16KB
    v260_iometer_rc_16KB.JPG
    Performance hit increased to 30-40% using 16KB.

    64KB
    v260_iometer_rc_64KB.JPG
    Performance hit is now 40-50%

    I have to say that this kind of data is highly unlikely unless the data is encrypted. Testing encrypted data is something completely different.
    Last edited by Anvil; 01-09-2011 at 03:05 PM.

  10. #10
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    repeat (blue line) is comparable to the old iometer, easily compressible data
    pseudo (red line) is harder to compress but still easily compressible
    full (green line) is much harder to compress.
    Ty, now I get it.
    Very interesting data here. I wonder where the real-life data pattern would fall, within the pseudo range? Interesting how that affects the latency quite a bit, apparently overhead from compression algorithms on the SF controller?
    Write data should be very interesting indeed. Are you seeing results as markedly different with writes as with reads? I would guess there would be a slightly bigger gap there.

    OFF-topic (kinda)
    here is an interesting tidbit i ran across on another forum, interesting take on it:

    Top 5 Most Frequent Drive Accesses by Type and Percentage:

    -8K Write (56.35%)
    -8K Read (7.60%)
    -1K Write (6.10%)
    -16K Write (5.79%)
    -64K Read (2.49%)
    Top 5 account for: 78.33% of total drive access over test period
    Largest access size in top 50: 256K Read (0.44% of total)

    Using Microsoft's DiskMon, he simply monitored his typical computer usage, doing things such as browsing the internet, running applications, playing music, etc. In short, he did his best to recreate the computer use of a typical user and then used the program to break down the percentage at which specific disk access sizes were being utilized. In the end, it confirms something we always suspected but just didn't really understand: large sequential read and write access is utilized by the average user less than 1% of the time, yet the most used method of access is smaller random write access, as shown by the 8K write at over 50%.
    So the writes, assuming the data is correct, would be where the differences in performance are more noticeable with the SF, or any other controller for that matter. Dunno, I am curious as to you guys' thoughts on that article as well....
    http://thessdreview.com/forum/genera...cturers-bluff/
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  11. #11
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    That red 64KB curve shows how bad Sandforce is at compressing data. Anvil found that "Compressing a 1GB testfile using 7z (normal) results in a ~1.73MB file", which is an incredible 578x compression factor. And yet Sandforce evidently could not compress it at all, since it took about a 50% performance hit compared to the blue curve.

    My guess is that a lot of data on most people's SSD will be similarly hard for Sandforce to compress.

  12. #12
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Computurd View Post
    so the writes, considering the data would be correct, would be where the differences in performance would be more noticeable with the SF, or any other controller for that matter. dunno i am curious as to you guys thoughts on that article as well....
    http://thessdreview.com/forum/genera...cturers-bluff/
    I think he is drawing a heck of a lot of conclusions from one person's dataset. I wonder whether the 8KB write data making up more than 50% may be representative of most people, or whether that user was doing something that highly inflated the number.

    Actually, I am unclear on what those numbers actually represent. What exactly are they using to calculate those percentages? Is it percentage of time spent in each operation? (that would be the most useful in this context, I think) Is it percentage of total bytes for each type of operation? Is it just counting the number of I/O operations of each type and computing a percentage? If so, that would not give us a good idea of how important each is.

    If you want your computer to feel faster, I think you want to see how much time is spent waiting for various types of I/O. If an 8KB write operation completes in 200 usec (microseconds), while an 80 MB write operation takes 1 sec to complete, even if you have 100 of the 8KB writes and only one of the 80MB writes, then you will spend 0.02 sec waiting for 8KB writes, and 1 sec waiting for a single 80MB write. So counting by write operations, you would have about 99% 8KB write, but speeding up the 80MB write (less than 1% of operations count) would be something you would notice, unlike the 8KB writes.

    Now, I am not saying that my example above is typical. It probably is not typical. But it demonstrates that when profiling usage patterns, you need to be careful how you do it. I think time spent in each type of I/O operation is the best way to profile usage patterns in this context.
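    A quick sketch of that example (the numbers are the ones above; the code is just an illustration of count-weighted vs time-weighted percentages):

    Code:
    ops = [("8KB write", 100, 200e-6),   # (label, count, seconds per operation)
           ("80MB write", 1, 1.0)]

    total_count = sum(n for _, n, _ in ops)
    total_time = sum(n * t for _, n, t in ops)

    for label, n, t in ops:
        print(f"{label:10s} {100 * n / total_count:5.1f}% of operations, "
              f"{100 * n * t / total_time:5.1f}% of time spent waiting")
    # -> the 8KB writes are ~99% of the operations but only ~2% of the waiting time.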

  13. #13
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    I think time spent in each type of I/O operation is the best way to profile usage patterns in this context.
    Good point, I agree.

    Is it just counting the number of I/O operations of each type and computing a percentage?
    I think so. I am wondering how they came to that as well. When he gets back from CES I will shoot him a PM and see. You make excellent points; that's what I was looking for, varied viewpoints on it.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  14. #14
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Ao1 should visit this thread; he's done a lot of monitoring of day-to-day tasks using hIOmon.

    Off the top of my head, as a general rule of thumb, read/write is closer to 80/20 (just like the workstation pattern in IOmeter: 80%/20% 8KB read/write).

    From one of the earlier documents compiled by Microsoft on the pagefile: Link
    It's not a typical workload, but it is typical of one of the major tasks that runs on most PCs.

    Should the pagefile be placed on SSDs?
    Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.
    In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that
    Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1,
    Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB, and 88% less than 16 KB.
    Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.

    In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.

  15. #15
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is what I get using the Workstation pattern Aligned test file from post #1
    [Attached: iop.png]

  16. #16
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Hi Ao1

    As expected, the results using the Intel are not affected by compression.
    I'll perform a workstation test tomorrow on the SF drive.

    I've also performed a few write tests but I'll have to create the charts for them; the performance hit is high but it's still faster than most SSDs.

    They have mixed up the explanations for pseudo and repeatable, AFAIK.
    I also wonder if the random data is created identically on all computers; random is, after all, random, and if it's not created identically the compression factor would not be the same and one can't really compare results. (Repeatable is OK; pseudo and full are the ones to look out for.)
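    To illustrate why this matters (a generic Python sketch, not IOmeter's generator: a fixed seed makes the pseudo random buffer, and hence its compressed size, reproducible across machines and runs, while an unseeded generator does not):

    Code:
    import random, zlib

    def buffer(seed=None, size=1 << 20):
        rng = random.Random(seed)
        return bytes(rng.choices(range(256), k=size))

    a, b = buffer(seed=1234), buffer(seed=1234)
    print(a == b)                                          # True: identical data every run
    print(len(zlib.compress(a)) == len(zlib.compress(b)))  # True: same compressed size

    c, d = buffer(), buffer()
    print(c == d)                                          # almost certainly False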

    Do you have any figures for reads vs writes, block sizes, ... on day-to-day tasks from your hIOmon testing?

  17. #17
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Hi Anvil, I'll leave hIOmon running for a week to get a good average and will post some stats when finished. 80% reads/20% writes sounds about right overall, but I suspect it will be different if you look at the device level only, as a lot of small reads/writes seem to get addressed without going to the device. I'll compare both anyway.

  18. #18
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by nbarsotti View Post
    Hi W1zzard, I think you are talking about using an unformatted drive for IOmeter versus letting IOmeter create the test file. What are the performance differences of doing it both ways? For myself, I am testing a current production drive so can't delete the partition, but I'm interested what the differences are.
    Physical drives are displayed with a blue disk icon. Blue disks are shown only if they contain nothing but free space (no defined partitions). Disk workers access physical drives by writing directly to the raw disk, so you are measuring the performance of the raw disk only.

    Anvil,

    I'm still puzzled by what the aligned performance test is supposed to reflect. According to the IOmeter manual:

    12.9 Align I/Os On
    The Align I/Os On control group specifies the alignment of each I/O on the disk, shown in the Alignment field (default Sector Boundaries). If the value of this field is n bytes, every I/O will begin at a multiple of n bytes from the beginning of the disk. You can select any value from 1 byte to 1023 MB + 1023 KB + 1023 bytes, but the specified value must be a multiple of the disk’s sector size. Entering the value 0 or selecting the Sector Boundaries radio button causes I/Os to be aligned on sector boundaries. (This value is ignored by network workers.)

    Note: If the Alignment field is set to a value other than Sector Boundaries and the Size value is not a multiple of the Alignment value, sequential I/Os will not be truly sequential. For example, if the Size is 3KB and the Alignment is 2KB, each sequential 3KB I/O will be followed by a 1KB “hole” before the beginning of the following I/O.


    I can't imagine that the above is something that would be likely to occur in real life?
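    Just to make the manual's 3KB/2KB example concrete (a small editorial sketch of the offsets, nothing IOmeter-specific):

    Code:
    SIZE_KB, ALIGN_KB = 3, 2   # the manual's example: 3KB transfers, 2KB alignment

    offset = 0
    for i in range(4):
        end = offset + SIZE_KB
        next_start = ((end + ALIGN_KB - 1) // ALIGN_KB) * ALIGN_KB   # round up to the alignment
        print(f"I/O {i}: {offset}KB-{end}KB, then a {next_start - end}KB hole")
        offset = next_start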

  19. #19
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is what I get using the Workstation pattern (non-aligned test file) from post #1. Again it's very similar between repeat, pseudo & random, so I've only posted the repeat test.
    [Attached: not aligned.png]

  20. #20
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Ao1

    The example can and will occur on Windows XP and other old OSes.
    Vista, W7 and OS X are all natively designed for 4KB alignment; XP was not.

    To make a long story short, IOmeter is by default aligned at 512B, when what we really want is 4KB alignment.
    "4KB aligned" would have been a better title for the charts, but I think of 512B as not being aligned.
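    A small sketch of why that matters on drives with 4KB pages (an illustration only, not a drive-level simulation; it just counts how often a 4KB transfer straddles a page boundary):

    Code:
    import random

    PAGE = 4096      # NAND/OS page size
    IO = 4096        # transfer size
    rng = random.Random(0)

    def crossing_rate(align, trials=100_000, span=1 << 30):
        crossings = 0
        for _ in range(trials):
            off = rng.randrange(span // align) * align   # random offset on the given alignment
            crossings += (off % PAGE) + IO > PAGE        # does this I/O straddle a page boundary?
        return crossings / trials

    print(f"4KB I/Os on 512B boundaries crossing a page: {crossing_rate(512):.0%}")   # about 88%
    print(f"4KB I/Os on 4KB boundaries crossing a page:  {crossing_rate(4096):.0%}")  # 0%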

    A few links of interest that explain it well:
    Anand on WD's Advanced Format transition
    Anand's State of the Union article was the first to address this in a test

    edit:
    Although the X25 is designed for 4KB pages like all current SSDs, Intel optimized for 512B as well, so for the Intels it doesn't make that much of a difference.
    Other SSDs like the C300 are not optimized for 512B at all, which is clearly seen in the State of the Union article at AnandTech.
    Last edited by Anvil; 01-09-2011 at 04:35 AM.

  21. #21
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Workstation pattern results using the V2 60GB on the ICH

    Repeating bytes : 24405 iops / ~200MB/s
    Pseudo random : 21387 iops / 175MB/s
    Full random : 21467 iops / ~176MB/s

    Pseudo and Full random are almost identical just like in the random read test.

    The Vertex 2 is awesome at the workstation pattern; I'll check how the C300 performs.

    edit:
    I don't have a single C300 available right now so I tested 2R0 C300 64GB on the ICH (using a 5GB testfile as opposed to 1GB on the Vertex 2 tests)
    Repeating bytes : 25856 iops / 211MB/s

    workstation.JPG
    Last edited by Anvil; 01-09-2011 at 11:01 AM. Reason: Added chart

  22. #22
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Vertex 2 60GB 4KB random write

    v260_iometer_rc_4KB_write.JPG

    I've included 2R0 C300 64GB. (my boot drive)

    At QD1, performance is equal whether the data is easily compressible or not.

  23. #23
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    On the ioDrive there is no difference whether the data is random or not (as expected). I used pseudo random; these are the workstation results:

    Aligned: 61295 iops
    Unaligned: 53706 iops

    Aligned with QD=1: 20996 iops

  24. #24
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    That's awesome.

    Maybe I should make a chart with more QDs?

    I'll include your result in the Workstation chart; looks like I'll have to make a RAID of a few Vertex 2s.

    workstation_2.JPG
    Last edited by Anvil; 01-09-2011 at 12:15 PM.

  25. #25
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by johnw View Post
    That red 64KB curve shows how bad Sandforce is at compressing data. Anvil found that "Compressing a 1GB testfile using 7z (normal) results in a ~1.73MB file", which is an incredible 578x compression factor.
    I've been doing some more tests using the pseudo random data generator and it seems there is more to it.
    The compression factor varies; the generator is not suitable for testing/comparing SF based drives.

    Using the pseudo random generator, the compressed size of the file has so far ranged from 1.73MB up to 64MB 7z files (still using normal compression).

    Repeating bytes and Full random look like the only options that make sense for testing SF based drives.
