Thread: hIOmon SSD Performance Monitor - Understanding desktop usage patterns.

  1. #1
    Xtreme Member
    Join Date
    Jan 2007
    Location
    Dorset, UK
    Posts
    439
    Quote Originally Posted by Ao1 View Post
    Looking at the individual processes I ran, none of them could utilise the max sequential read speed capability of my SSD. When running multiple processes together to try to reach the maximum sequential read speed capability of my SSD, the CPU would max out.
    This does underscore the things we need to think about when looking at SSD marketing numbers for our relatively undemanding desktop systems.

    10,000 IOPs or more may scream "better!" but if the highest IOPs actually demanded by the system is in the hundreds there's no practical gain from that. Sucking in huge quantities of data faster (higher streaming speed) from disks is better, surely? But the processor then has to do something with that vast quantity of data, which places the bottleneck back on processing speed beyond a certain point.

    For server situations, high IOPs are significant since you have multiple users making multiple requests and higher streaming speeds may mean the individual requests can complete faster. That's obviously an advantage.

    But the bottom line you've demonstrated, if my skim of the results and your analysis is correct, is that the read/write latency is the only significant factor affecting speed for most single-user desktop systems that do not have specialised requirements. That should be the obvious metric we need to compare between disks. And if you think about it... that's what we've always done with storage! The mechanical disk with a lower track seek has always had the overall speed advantage.

    And yes, the pagefile result is fascinating! I don't think anyone would have expected that turning it off completely affects speed. As a guess, maybe Windows is dumping some reads speculatively into the pagefile simply because it's there, wasting time if the physical memory in the system is enough to handle the total VM requests.
    Quote Originally Posted by Particle View Post
    Software patents are mostly fail wrapped in fail sprinkled with fail and sautéed in a light fail sauce.

  2. #2
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Here is an AS SSD benchmark of an OCZ Core on the left and an X25-M on the right.



    The OCZ Core has no NCQ support, so IOPS cannot increase with queue depth.
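
    As a rough sketch of what NCQ buys you (the access time and queue limits below are made-up figures, purely for illustration):

    Code:
    # Simplified model: with NCQ the drive can work on several outstanding
    # requests at once, so a deeper queue raises IOPS; without NCQ the extra
    # requests just wait in line. All figures are assumptions for illustration.

    ACCESS_TIME_S = 0.0001   # assumed per-request service time (0.1 ms)

    def iops(queue_depth, drive_queue_limit):
        # Effective parallelism is capped by what the drive can actually queue.
        effective = min(queue_depth, drive_queue_limit)
        return effective / ACCESS_TIME_S

    print(iops(queue_depth=4, drive_queue_limit=1))    # no NCQ: ~10,000 IOPS, same as QD 1
    print(iops(queue_depth=4, drive_queue_limit=32))   # NCQ: ~40,000 IOPS at QD 4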

    • Both drives have fast sequential read/write speeds (compared to an HDD).
    • Read response times are low on both drives (compared to an HDD).

    The write response time is, however, very high on the OCZ Core when you start writing lots of small files randomly.

    This is due to the write penalty. The OCZ Core has a 4KB page size, and the 4KB pages sit within a 512KB erase block. Once a page has been written it cannot simply be overwritten; the block containing it must be erased before that space can be written again. A single 4KB write can therefore force the controller to copy out any still-valid pages and erase a whole 512KB block before the 4KB of data can be written, which is why the write access time becomes very high.
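
    To put some (made-up) numbers on that penalty, here is a minimal sketch; the page read, page program and block erase times below are assumptions for illustration, not OCZ Core specs:

    Code:
    # Rough model of the erase-before-write penalty described above.
    # All timing figures are assumptions, not measured OCZ Core values.

    PAGE_KB = 4              # NAND page size
    BLOCK_KB = 512           # erase block size (128 pages per block)

    READ_PAGE_MS = 0.1       # assumed time to read one 4 KB page
    PROGRAM_PAGE_MS = 0.25   # assumed time to program one 4 KB page
    ERASE_BLOCK_MS = 2.0     # assumed time to erase one 512 KB block

    def write_4k_ms(free_page_available, valid_pages_in_block=0):
        # With a free page the write is just a page program. Without one,
        # the controller must copy out the still-valid pages, erase the
        # whole block and write everything back, including the new page.
        if free_page_available:
            return PROGRAM_PAGE_MS
        copy_out = valid_pages_in_block * (READ_PAGE_MS + PROGRAM_PAGE_MS)
        return copy_out + ERASE_BLOCK_MS + PROGRAM_PAGE_MS

    print(write_4k_ms(True))                             # fresh page: ~0.25 ms
    print(write_4k_ms(False, valid_pages_in_block=127))  # full block rewrite: ~47 ms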

    From what I have monitored, large single-file transfers (e.g. AVIs) are carried out with only a few IOPS, so there should be no problem copying large single files with the OCZ Core. Lots of small writes are, however, a problem.
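
    For comparison, here is a rough sketch of why a big sequential copy only needs a few IOPS; the throughput and transfer size below are assumed figures, not measurements from my system:

    Code:
    # A big sequential copy is issued as large transfer requests, so even a
    # high MB/s rate is only a modest number of operations per second.
    # Both figures below are assumptions for illustration.

    SEQ_THROUGHPUT_MB_S = 80   # assumed sequential write speed of the drive
    REQUEST_SIZE_KB = 1024     # assumed transfer size per request (1 MB)

    iops = (SEQ_THROUGHPUT_MB_S * 1024) / REQUEST_SIZE_KB
    print(iops)                # 80 requests per second move 80 MB/s of data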

    When the OCZ Core is hammered with 4K writes, the access time reached 2.5 seconds.

    • Queue depth 1 x (1 / 2.496 s) ≈ 0.4 write IOPS

    When the X25-M is hammered with 4K writes, the access time reached 0.091 ms.

    • Queue depth 1 x (1 / 0.000091 s) ≈ 10,989 IOPS (which is above the "up to" max spec of 8,600 IOPS)
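
    If it helps, here is the same queue depth 1 arithmetic as a minimal sketch, using only the two access times quoted above (assuming I have the unit conversions right):

    Code:
    # At queue depth 1 the drive services one request at a time, so the
    # achievable IOPS is simply 1 / access time (access time in seconds).

    def iops_at_qd1(access_time_s):
        return 1.0 / access_time_s

    print(iops_at_qd1(2.496))      # OCZ Core, ~2.5 s access time -> ~0.4 IOPS
    print(iops_at_qd1(0.000091))   # X25-M, 0.091 ms access time  -> ~10,989 IOPS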

    I hope I have understood this correctly. Please jump in if I am making incorrect assumptions.
    Last edited by Ao1; 02-26-2011 at 01:51 PM.

  3. #3
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by IanB View Post
    This does underscore the things we need to think about when looking at SSD marketing numbers for our relatively undemanding desktop systems.

    10,000 IOPs or more may scream "better!" but if the highest IOPs actually demanded by the system is in the hundreds there's no practical gain from that. Sucking in huge quantities of data faster (higher streaming speed) from disks is better, surely? But the processor then has to do something with that vast quantity of data, which places the bottleneck back on processing speed beyond a certain point.

    For server situations, high IOPs are significant since you have multiple users making multiple requests and higher streaming speeds may mean the individual requests can complete faster. That's obviously an advantage.

    But the bottom line you've demonstrated, if my skim of the results and your analysis is correct, is that the read/write latency is the only significant factor affecting speed for most single-user desktop systems that do not have specialised requirements. That should be the obvious metric we need to compare between disks. And if you think about it... that's what we've always done with storage! The mechanical disk with a lower track seek has always had the overall speed advantage.

    And yes, the pagefile result is fascinating! I don't think anyone would have expected that turning it off completely affects speed. As a guess, maybe Windows is dumping some reads speculatively into the pagefile simply because it's there, wasting time if the physical memory in the system is enough to handle the total VM requests.
    I am trying to muddle through this with a very limited understanding. The monitoring results are what they are. I really hope the conclusions I try to draw are not misleading.

    It would be great to get some feedback if I am looking at it incorrectly.
