
Thread: hIOmon SSD Performance Monitor - Understanding desktop usage patterns.

  1. #126
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post
    Regarding my HDD vs SSD bootup metrics
    The F3 boots in about 21-22 seconds (using boottimer), so the period to monitor could easily have been set to 60 seconds (or less). I don't think that would have changed anything, unless some random scheduled activities started during the monitoring period (e.g. Windows Update, AV scan, ...).
    Like you've already mentioned, it's all down to when the monitoring starts.
    Both the SSD and the HDD are capable of reading/writing TBs of data during the 2 minute monitoring period, so that's a moot point with respect to comparing total I/O during bootup.
    Looking at the idle/busy time percentages confirms that most of the time it's just idling during the 2 minute period.
    (and that the idle part is in favour of the SSDs )
    I found with an HDD that there is a BIG difference between Windows first appearing and Windows being fully loaded. With an HDD you can hear the drive clunking away well after Windows first appears, and if I made the test period less than 2 minutes I could not capture a comparable data load.
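
    (For anyone wanting to sanity-check the idle/busy percentages against the monitoring window, here's a rough Python sketch of the arithmetic; the numbers are placeholders, not actual hIOmon fields.)

        # Rough sketch: busy/idle percentages for a fixed monitoring window.
        # The values below are placeholders, not actual hIOmon export fields.

        def busy_idle_pct(busy_time_s: float, window_s: float) -> tuple[float, float]:
            """Return (busy %, idle %) for a monitoring window of window_s seconds."""
            busy_pct = 100.0 * busy_time_s / window_s
            return busy_pct, 100.0 - busy_pct

        # Example: 14 s of device busy time inside a 120 s (2 minute) capture window
        busy, idle = busy_idle_pct(14.0, 120.0)
        print(f"busy: {busy:.1f}%  idle: {idle:.1f}%")   # busy: 11.7%  idle: 88.3%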

    Not related, but when I used hardware RAID with cache it appeared that Windows reports the cache speed: it reports that data has been transferred when in fact the data is still being flushed from the cache to the hard drive. That is one benefit of an HDD; you can listen to what is going on.

    I'm really looking forward to seeing your analysis with VMs.
    Last edited by Ao1; 11-05-2010 at 01:52 AM.

  2. #127
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here I run WinBootInfo using the system image from my previous monitoring and compare it to a new OS install with fewer apps loaded.

    WinBootInfo reports less read data than hIOmon: ~337MB vs 420MB.

    I'm not really worried about this because what I did with hIOmon was comparable in terms of data transfer between SSD and HDD, which was my primary objective.

    What this does show however is that the apps you have loaded make a huge difference to boot times.

    The CPU graph was way too long to post due to the time it took the HDD to load, but interestingly the CPU was maxed out a lot more over a much longer duration.


  3. #128
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I noticed the boot analysis app in the C300 review; it looked OK.
    I'll give it a try and I might also follow up on overthere's comment about the hIOmon bootup monitoring.
    I thought I was already using the correct method for monitoring bootup using hIOmon.

    My install on the F3 was about 10 days old at the time of testing so it's a pretty fresh/clean install compared to your setup.

    edit:

    I've downloaded the app, will try later today.
    Last edited by Anvil; 11-05-2010 at 02:54 AM.

  4. #129
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Looking at Lsdmeasap's result I'd guess it was done on a fresh install; only 52MB of read data. I used to think it was fragmentation that slowed down HDD boot times over time, but it seems that it's more related to the increase in extra files that get loaded over time. That doesn't seem to make much difference with SSDs because they are so fast in comparison.



    Edit:

    hIOmon provides a much more sophisticated boot analysis tool than the way I undertook the comparison. I was not so interested in the boot process itself as in the difference in performance with the same workload. In that context I think what I did was valid, as it managed to replicate a very similar workload for comparative analysis between HDD, SSD and SSD RAID 0.
    Last edited by Ao1; 11-05-2010 at 03:14 AM.

  5. #130
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    AMD rig, Samsung F3, 1090T

    According to WinBootInfo the system boots in 35.7 sec

    Enabling all cores in MSConfig shaves off ~3 seconds
    bt_2_all_cores.PNG

  6. #131
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    VMWare hIOmon testing.

    Setup
    X58-UD7, Ci7 920@4, 12GB RAM, boot drive 2R0 C300 64GB
    Controllers : ICH, PERC 6/i, LSI 9260-8i
    Drives used for testing VMs
    2R0 Intel G2 160GB (LSI)
    4R5 Samsung F3 1TB (PERC)
    WD20EARS (ICH)
    Vertex 2 60GB (ICH)

    VM used: W7 Enterprise, a development VM using 3 drives, a total of 23.5GB

    The test consists of starting the VM, loading the development environment and performing a complete build of an application (~1.1M lines of code), then shutting down the VM.

    hIOmon is set to capture every 8 minutes (using the Presentation Client).

    The results are somewhat close to what I expected.
    I'll compile a spreadsheet for the most interesting metrics.
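
    In case anyone wants to compile a similar summary themselves, here's a rough Python sketch of the idea (the field names and zeroed values are placeholders, not hIOmon's actual export format):

        # Sketch: collect a few per-drive metrics into a CSV "summary spreadsheet".
        # Field names and values are placeholders to be filled in from the captures.
        import csv

        runs = [
            {"drive": "WD20EARS (ICH)",      "read_MB": 0, "write_MB": 0, "busy_s": 0, "max_resp_ms": 0},
            {"drive": "4R5 F3 1TB (PERC)",   "read_MB": 0, "write_MB": 0, "busy_s": 0, "max_resp_ms": 0},
            {"drive": "Vertex 2 60GB (ICH)", "read_MB": 0, "write_MB": 0, "busy_s": 0, "max_resp_ms": 0},
            {"drive": "2R0 Intel G2 (LSI)",  "read_MB": 0, "write_MB": 0, "busy_s": 0, "max_resp_ms": 0},
        ]

        with open("hiomon_summary.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(runs[0].keys()))
            writer.writeheader()
            writer.writerows(runs)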

    For now, here are the screenshots.

    vm_boot_build_shutdown_ICH_WD20EARS_4.PNG vm_boot_build_shutdown_perc_F3_4R5_1.PNG

    vm_boot_build_shutdown_ICH_V260GB_2.PNG vm_boot_build_shutdown_9260_Intel_2.PNG

    Summary
    hiomon_summary.PNG

    A note about the 1.2798s max response time on the Vertex 2:
    The response time is due to TRIM operations logged in the Control I/O section of hIOmon; refer to the read/write I/O metrics for the max response time during normal operations.
    Last edited by Anvil; 11-07-2010 at 10:54 AM.

  7. #132
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    You’ve got to hand it to the ICH when it comes to SSDs. It’s standing its ground with a single SSD against a hardware RAID array with 512MB of DDR2 cache.

  8. #133
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Ao1,
    I've uploaded a summary.

    I knew the Vertex 2 would be very close to the LSI array; as long as the V2 is not in steady state it simply rocks
    I don't have any SF drives that are in steady state right now, I've recently cleaned all of them.

    Some comments about the runs.
    I think the numbers speak for themselves.
    QD never gets to build up on the SSDs the way it does on the HDDs; the rest is as expected.
    Busy time is a great metric; it clearly shows the time spent on I/O.
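
    As a rough way to relate QD and Busy time: the average number of outstanding I/Os can be estimated from the I/O rate and the average response time (Little's law). A small Python sketch with made-up numbers:

        # Sketch: estimate average queue depth (outstanding I/Os) via Little's law:
        #   avg outstanding = arrival rate (IOPS) * avg response time (seconds)
        # The numbers are illustrative only.

        def avg_queue_depth(io_count: int, elapsed_s: float, avg_resp_ms: float) -> float:
            iops = io_count / elapsed_s
            return iops * (avg_resp_ms / 1000.0)

        # Same I/O count over the same elapsed time, different response times:
        print(avg_queue_depth(100_000, 480.0, 0.2))   # SSD-like  -> ~0.04 outstanding
        print(avg_queue_depth(100_000, 480.0, 12.0))  # HDD-like  -> ~2.5 outstanding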

    The HDDs are simply terrible compared to any SSD I've got.
    Loading a VM using an SSD is a joy compared to any HDD setup, whether it's a single HDD or an array, SSDs are in a different league.

    The Vertex 2 and the LSI 9260 setup feel very close in terms of responsiveness; however, this is running only 1 VM.
    I normally run 3-4 VMs simultaneously; the difference between running 1 VM vs 2-3 VMs on the ICH is huge compared to running the same number of VMs on the LSI or Areca.

  9. #134
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Hi Anvil,

    Great job on your VMware hIOmon testing and on your reporting of the result!

    One quick observation on the results that you posted:

    Within the summary table, the "Totals I/O resp Max" for the "Vertex2 60GB" is shown as "1.2798s".

    This is technically correct since there was a control I/O operation with the "Vertex2 60GB" that took 1.2798563 seconds as observed by the hIOmon software.

    I assume that you used the "SSDPerfAnalysisFS" script to configure the hIOmon software. This script configures the hIOmon software to collect I/O operation performance metrics related to the "TRIM" control I/O operation.

    As a result, the metrics shown within the hIOmon Presentation Client summary displays under the "Control I/O Operations" section reflect the TRIM control I/O operations as observed by the hIOmon software. (Please note that there are display options available that enable additional TRIM-specific metrics to be shown within the hIOmon Presentation Client display).

    In any case, the observations of the "Control" I/O operations are included within the "Total I/O Operations" section shown near the bottom of the hIOmon Presentation "summary" display.

    And consequently the "Vertex2 60GB" has a "1.2798s" maximum response time (since the "totals" include the control I/O operations together with the read and the write I/O operations).

    So technically the "1.2798s" is correct for the "totals" maximum response time for the "Vertex2 60GB".

    However from the perspective of only read and write I/O operations, the maximum response time for the "Vertex2 60GB" is 32.9417ms (as also shown within your summary table).
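
    In other words, the "totals" maximum is simply the maximum taken across the read, write and control categories; a trivial sketch using the values above:

        # The "Totals" max response time is the max across the I/O categories.
        # Values taken from the Vertex2 60GB summary discussed above.
        read_write_max_s = 0.0329417   # 32.9417 ms, read/write I/O operations only
        control_max_s    = 1.2798563   # TRIM (control) I/O operation

        totals_max_s = max(read_write_max_s, control_max_s)
        print(f"Totals I/O resp Max: {totals_max_s}s")   # 1.2798563s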

    Thanks again for your time and efforts on this!
    Last edited by overthere; 11-07-2010 at 08:59 AM.

  10. #135
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Hi overthere,

    Yes, I used the SSDPerfAnalysisFS on all drives, including HDDs.

    I assumed that the Control I/O was related to TRIM and was going to ask you about that later.
    In any case, it's relevant and has to be included; TRIM does interfere with other I/O (read slowdown).
    If I do more testing using the same VM it will be reflected on any TRIM-capable SSD, as long as it's used in single-drive mode.

    I believe Ao1 disabled TRIM on his W7 install and rather did the cleaning using the SSD Toolbox on the Intels.

    Where do I enable the additional TRIM metrics?

  11. #136
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Hi overthere,

    If you use the fsutil behavior set disabledeletenotify 1 command to disable the OS from issuing TRIM commands, would this also stop hIOmon from registering TRIM control I/O operations?

    I guess if you are in a RAID configuration (and can’t pass the TRIM command) it would be better to disable it (?)
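
    (For reference, a minimal Python sketch that just wraps the standard fsutil commands to query/change the setting; changing it needs an elevated prompt.)

        # Sketch: query / change the Windows TRIM ("delete notify") setting via fsutil.
        # "DisableDeleteNotify = 0" means the OS will issue TRIM commands.
        import subprocess

        def query_trim() -> str:
            out = subprocess.run(
                ["fsutil", "behavior", "query", "disabledeletenotify"],
                capture_output=True, text=True, check=True)
            return out.stdout.strip()

        def set_trim(enabled: bool) -> None:
            value = "0" if enabled else "1"   # 1 disables delete notifications (TRIM)
            subprocess.run(
                ["fsutil", "behavior", "set", "disabledeletenotify", value],
                check=True)

        print(query_trim())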

  12. #137
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post
    Hi overthere,

    I believe Ao1 disabled TRIM on his W7 install and rather did the cleaning using the SSD Toolbox on the Intels.
    I left TRIM on and didn't use the toolbox as this is how I normally use my system.

    Anvil, I don't doubt that 2 or more VMs would start to slow down on one SSD, but I'm intrigued to see what load can be thrown at a single drive before it starts to slow down and where the slowdown occurs. Would it be loads of work to run two, then three, maybe four VMs? Does the CPU have to work a lot harder with a single SSD when compared to hardware RAID?

    It would also be great to see how steady-state performance differs from fresh when comparing an X25-M, Vertex 2 & C300. I know I'm a cheeky bu**er

    Here's a quick side-by-side of my observations from the boot-up monitoring.


  13. #138
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Anvil View Post
    Yes, I used the SSDPerfAnalysisFS on all drives, including HDDs.

    I assumed that the Control I/O was related to TRIM and was going to ask you about that later.
    In any case, it's relevant and has to be included, TRIM does interfere with other i/o. (read slowdown)
    In case I do more testing using the same VM it will be reflected on any TRIM capable SSD, as long as it's used in single mode.

    I believe Ao1 disabled TRIM on his W7 install and rather did the cleaning using the SSD Toolbox on the Intels.

    Where do I enable the additional TRIM metrics?
    I can see where you would want to include the TRIM control I/O operation metrics.

    My main concern in my prior post was that some folks might be puzzled as to the "1.2798s" value shown for the "Vertex2 60GB" within the "Totals I/O resp Max" entry in your summary table.

    I thought that it might be helpful to provide a brief explanation as to where this value came from (and that it was not some bogus value).

    Anyway, to enable the display of the additional TRIM metrics within the hIOmon Presentation Client "summary" display, please perform the following steps:

    1) Down in the lefthand corner at the bottom of the hIOmon Presentation Client "summary" display you will find two buttons: "Table View" and "Options".

    2) Click upon the "Options" button (located immediately to the right of the "Table View" button), which will cause a new hIOmon "Display List Options" window to be displayed.

    3) Near the bottom of this "Display List Options" window you will see a section called "I/O Summary metric types to be displayed:"

    There is a checkbox for "Control I/O Operations", under which there will be an additional option called "Include MDSA metrics".

    Enable/select the "Include MDSA metrics" checkbox so that the additional metrics related to "Manage Data Set Attributes (MDSA)" control I/O operations (which include TRIM commands) will also be displayed within the hIOmon Presentation Client "summary" displays.

  14. #139
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Ao1

    I'll see what I can do about the steady-state performance; it takes a while to get the SF drive into steady state (due to compression), and provoking steady state on the SF would also invoke DuraWrite.
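
    (A rough sketch of the kind of fill I'd use to provoke it: writing incompressible random data, so the SF controller's compression can't reduce the actual writes. The target path and sizes are placeholders; point it at a scratch drive.)

        # Rough sketch: fill a scratch drive with incompressible (random) data so that
        # SandForce-style compression cannot reduce the NAND writes.
        # TARGET and sizes are placeholders; do not point this at the OS drive.
        import os

        TARGET = r"X:\fill"          # placeholder path on the drive being "aged"
        CHUNK = 64 * 1024 * 1024     # 64 MiB per file
        FILES = 100                  # ~6.4 GiB total; repeat/increase to approach steady state

        os.makedirs(TARGET, exist_ok=True)
        for i in range(FILES):
            with open(os.path.join(TARGET, f"fill_{i:04d}.bin"), "wb") as f:
                f.write(os.urandom(CHUNK))   # random data is effectively incompressible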

    The issue with comparing more than 1 VM (on any number of drives) is getting comparable results, which is not that easy to achieve for such a test. I might give it a try though.

    I'm almost done with the WD5000BEKT (2.5" 7200rpm drive), I'll post the metrics shortly.

  15. #140
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by overthere View Post
    I can see where you would want to include the TRIM control I/O operation metrics.

    My main concern in my prior post was that some folks might be puzzled as to the "1.2798s" value shown for the "Vertex2 60GB" within the "Totals I/O resp Max" entry in your summary table.
    ...
    I'll make a note about the 1.2s response time

    I found the MDSA option, I'll do a similar test on the V2 with the option enabled.

  16. #141
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Ao1 View Post
    Hi overthere,

    If you use the fsutil behavior set disabledeletenotify 1 command to disable the OS from issuing TRIM commands, would this also stop hIOmon from registering TRIM control I/O operations?

    I guess if you are in a RAID configuration (and can’t pass the TRIM command) it would be better to disable it (?)
    Hi Ao1,

    The short answer is that the use of the "fsutil" utility does not change the configuration of the hIOmon software.

    Consequently, if the hIOmon software is configured to capture I/O operation performance metrics for MDSA control I/O operations (e.g., TRIM commands), then the hIOmon software will continue to collect such metrics if it observes the TRIM commands.

    But if fsutil is used to disable the OS from issuing TRIM commands, then the hIOmon software should not, of course, observe any TRIM commands.

    Regarding whether to use fsutil in RAID configurations to disable the issuance of TRIM commands by the OS: the effect of leaving TRIM enabled is basically negligible.

    That is, the OS will issue a single TRIM command to the physical device (if TRIM commands are enabled to be issued by the OS). If this TRIM control I/O operation is not successfully completed by the physical device, then the OS will not subsequently issue any other TRIM commands to the physical device.

    You can see this effect in Anvil's hIOmon Presentation Client "summary" display excerpts in those cases where a HDD/RAID was used.

  17. #142
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I've added the "fastest?" 2.5" non-hybrid HDD to the mix.

    vm_boot_build_shutdown_ICH_WD500BEKT_2.PNG

    It does pretty well compared to the 5400rpm EARS drive.

    A new summary is on its way.

  18. #143
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Anvil View Post
    I'll make a note about the 1.2s response time

    I found the MDSA option, I'll do a similar test on the V2 with the option enabled.
    Sounds good.

    Just to be clear to other folks that might be interested in the use of the "MDSA" display option provided by the hIOmon Presentation Client "summary" display:

    This option only pertains to the display of the MDSA-related metrics within the hIOmon Presentation Client "summary" metrics display.

    That is, this option does not change the collection (by the hIOmon software) of the MDSA-related I/O operation performance metrics (e.g., those associated with TRIM commands).

    Basically, this MDSA display option is provided to help reduce the overall size of the hIOmon Presentation Client "summary" metrics display.

    In other words, the MDSA-related metrics can be displayed only when you want them included within the hIOmon Presentation Client "summary" metrics display.

    Hopefully I haven't made things confusing.

  19. #144
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    New summary that includes the WD5000BEKT

    hiomon_summary_2.PNG

    I've started testing Seagate Momentus XT.
    (the last drive to be tested this time)

  20. #145
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Anvil, what I was interested in seeing is whether the degradation that can be seen in a benchmark has any impact on real-life applications that are not pushing the drive to its limits. In other words, whilst degradation can be quantified easily with a benchmark, can hIOmon quantify the impact in something like a single VM? I suspect average response times would increase, but I think overall the impact would be minimal, i.e. nowhere near as scary as a benchmark might indicate.

    Thanks for all the testing by the way. It’s nice to be able to see some comparative results.

    Looking forward to seeing the Momentus XT

  21. #146
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Ao1,

    It is quite noticeable on the Vertex 2; once settled in (steady state), one doesn't need to run a benchmark to notice the change.
    I have to say that I haven't experienced this using the latest firmware 1.23; my latest experience with this phenomenon was with FW 1.10.
    (I do run 1.23 on all SF drives)

    Momentus XT coming up now
    Last edited by Anvil; 11-07-2010 at 01:01 PM. Reason: rephrase regarding FW 1.23

  22. #147
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Momentus XT (the so-called hybrid drive)

    I've included both the 1st and 2nd runs, as there is a remarkable change in "Busy time", and the 2nd run is what this drive is "all" about.

    vm_boot_build_shutdown_ICH_MOMENTUS_XT500_1.PNG vm_boot_build_shutdown_ICH_MOMENTUS_XT500_2.PNG

    Summary
    hiomon_summary_3.PNG
    Last edited by Anvil; 11-07-2010 at 12:27 PM.

  23. #148
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    overthere,

    This is the Control I/O w/MDSA metrics.

    I expect this could be the snapshot that's deleted.

    vm_boot_build_shutdown_ICH_V260GB_3_trim.PNG

  24. #149
    Xtreme Member
    Join Date
    May 2010
    Posts
    112
    Quote Originally Posted by Anvil View Post
    This is the Control I/O w/MDSA metrics.
    For those curious about what these MDSA-related metrics reflect (a small worked sketch follows the list):

    • There were 5 MDSA control I/O operations that requested a TRIM

    • The total combined number of "Data Set Ranges (DSR)" specified by these 5 TRIM control I/O operations was 19.

      A DSR identifies the starting offset (i.e., essentially the starting block address for the particular set of blocks/sectors) along with the overall combined length of the blocks/sectors to be "trimmed".

    • The total combined length of the DSRs specified by these 5 TRIM commands was 1,611,800,576 bytes.

    • The smallest DSR length observed amongst all of these DSRs was 4,096 bytes

    • The largest DSR length observed amongst all of these DSRs was 1,610,612,736 bytes

    • The lowest starting address (in bytes) observed amongst all of these DSRs was 1,232,896

    • The highest starting address (in bytes) observed amongst all of these DSRs was 3,180,806,144

    • The minimum total number of DSRs specified by a single TRIM control I/O operation was 1

    • The maximum total number of DSRs specified by a single TRIM control I/O operation was 8

    • The minimum total combined length of the DSRs specified by a single TRIM control I/O operation was 53,248 bytes

    • The maximum total combined length of the DSRs specified by a single TRIM control I/O operation was 1,610,862,592 bytes
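
    To make the aggregation concrete, here is a small Python sketch that computes the same kind of summary from a list of TRIM operations; the ranges below are made up for illustration, not the 19 DSRs actually observed:

        # Sketch: aggregate Data Set Range (DSR) metrics across TRIM control I/Os.
        # Each TRIM is a list of (starting_offset_bytes, length_bytes) DSRs.
        # These ranges are illustrative only, not the DSRs actually observed.
        trims = [
            [(1_232_896, 4_096)],
            [(10_485_760, 53_248)],
            [(52_428_800, 131_072), (104_857_600, 262_144)],
            [(209_715_200, 524_288)],
            [(3_180_806_144, 1_610_612_736)],
        ]

        dsrs = [dsr for trim in trims for dsr in trim]
        lengths = [length for _, length in dsrs]
        offsets = [offset for offset, _ in dsrs]

        print("TRIM control I/O ops:", len(trims))
        print("Total DSRs:          ", len(dsrs))
        print("Total length (bytes):", sum(lengths))
        print("Min/Max DSR length:  ", min(lengths), max(lengths))
        print("Min/Max start offset:", min(offsets), max(offsets))
        print("Min/Max DSRs per op: ", min(len(t) for t in trims), max(len(t) for t in trims))
        print("Min/Max combined length per op:",
              min(sum(l for _, l in t) for t in trims),
              max(sum(l for _, l in t) for t in trims))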

  25. #150
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    VMware creates a 1.5GB snapshot (1,610,612,736 bytes, which exactly matches the size of the largest DSR) + 4 LCK files @ 4KB each.

    I added the Seagate 7200.12 1TB and the WD VelociRaptor (WD VR300) to the chart. Unless someone wants to have a look at them, I won't upload the screenshots.

    New summary
    hiomon_summary_4.PNG
