
Thread: Random and sequential speeds with IOmeter

  1. #51
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I've been busy lately so I haven't had time to follow up on the results and graphs.
    I'll try again next weekend.

    It would have been great fun trying 4 or 5 G2 160GB units using the ICH but the results are quite exceptional as they are.

  2. #52
    Banned
    Join Date
    May 2009
    Posts
    676
    never mind
    Last edited by onex; 02-18-2010 at 06:35 AM.

  3. #53
    Registered User
    Join Date
    Dec 2009
    Posts
    36
    Quote Originally Posted by onex View Post
    dh41400
    can't see your point in testing a short-stroked HDD vs a partitioned, non-short-stroked one through IOMeter,
    there obviously shouldn't be any difference at all;
    the HDD works the same whether it's short-stroked to 30GB or you're checking the last 30GB of it through IOMeter,
    there is no difference at all in arm movement, so this test is definitely superfluous.
    Actually it's not.

    The main reason was to see if a second fully-filled partition would make any impact on the performance of the first one.

    It doesn't.

    Well, it's easy to spot now that the tests are done

    lol, because it's interesting.
    Actually, disk load (# of Outstanding I/Os, i.e. the IOmeter load) is quite different from the common knowledge.

    This is thoroughly explained at StorageReview and it can be easily monitored with perfmon.exe:

    [perfmon screenshot]

    As seen, the average disk load is far from single-digit numbers ... I'm mostly seeing an average of 30-50 outstanding I/Os (a.k.a. queue depth).
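    For anyone who wants to log the same counters outside the perfmon GUI, here's a rough Python sketch driving Windows' typeperf; the "_Total" instance, the English counter names and the 1-second interval are my assumptions, so adjust them to the disk actually under test.
    Code:
    # Rough sketch, not dh41400's exact setup: sample the PhysicalDisk
    # queue-length counters once per second via Windows' typeperf and
    # dump them to a CSV for later graphing.
    import subprocess

    counters = [
        r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
        r"\PhysicalDisk(_Total)\Current Disk Queue Length",
    ]

    # -si 1 = 1 s sample interval, -sc 60 = 60 samples, -y = overwrite output
    subprocess.run(
        ["typeperf", *counters, "-si", "1", "-sc", "60", "-o", "qd_log.csv", "-y"],
        check=True,
    )
    print("queue-depth samples written to qd_log.csv")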


    BTW for the other subject - IOmeter results are VERY dependent on the number of workers and type of caching involved ...
    Last edited by dh41400; 02-19-2010 at 07:16 AM.

  4. #54
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by dh41400 View Post
    As seen, the average disk load is far from single-digit numbers ... I'm mostly seeing an average of 30-50 outstanding I/Os (a.k.a. queue depth).
    You'll have to divide those 30-50 outstanding I/Os by the scale, which in your case is 100.
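    To make the scaling concrete, a tiny sketch using the numbers from this exchange (30-50 plotted, scale 100):
    Code:
    # perfmon plots counter_value * scale, so to read the real queue depth
    # off the graph you divide the plotted number by the scale factor.
    plotted = [30, 50]                    # what the perfmon graph shows
    scale = 100                           # graph scale in this case
    actual = [v / scale for v in plotted]
    print(actual)                         # [0.3, 0.5] -> real average queue depth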

  5. #55
    Banned
    Join Date
    May 2009
    Posts
    676
    Hypothesis was correct ... no significant difference between "short-stroking" and partitioning the first xy GB of the drive.

    There was no difference even if the second partition was filled with data; the first partition was running flawlessly - almost identical to the "stroked" drive (and I was a little surprised by this).
    that's exactly what was said; maybe the word "last" should've been "first", yet it shouldn't matter except for burst speeds.

    The main reason was to see if a second fully-filled partition would make any impact on the performance of the first one.
    you mean, while working only with the first one?
    they're not supposed to be connected; the HDD reads data from the first one separately from the second/third etc.
    the head moves only within the partition that is in use, paying no attention to the other parts of the disk.
    the only thing that could make any difference is reading the metadata that defines each partition, which is supposedly stored in the MBR (or the volume boot record for a non-partitioned device).
    so it seems, at least.

    Actually, disk load (# of Outstanding I/Os, i.e. the IOmeter load) is quite different from the common knowledge.
    this is right, and it's also explained very nicely in the iXBT Labs article on IOMeter, which everyone using it should read:
    Now a few words on the # of Outstanding I/Os parameter. If you set it to 1, then with the Percent Random/Sequential Distribution set to 100% random we in fact measure random access time. A value of 4 corresponds to the load of an elementary application like Windows Calculator. According to StorageReview, on average this parameter is 30-50 for real applications. A value above 100 corresponds to a high disk load (e.g. during defragmentation). Accordingly, they suggest taking the following 5 values for this parameter.
    lol, it points to the StorageReview article you brought up as well.
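    Just to keep the quoted rule of thumb in one place (nothing measured here, only the article's numbers):
    Code:
    # The iXBT/StorageReview rule of thumb for # of Outstanding I/Os,
    # exactly as quoted above -- no measurements of my own.
    qd_meaning = {
        "1":     "pure random access time measurement",
        "4":     "an elementary application, e.g. Windows Calculator",
        "30-50": "average real applications (per StorageReview)",
        ">100":  "heavy disk load, e.g. defragmentation",
    }
    for qd, meaning in qd_meaning.items():
        print(f"# of Outstanding I/Os {qd}: {meaning}")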

    BTW for the other subject - IOmeter results are VERY dependent on the number of workers and type of caching involved ...
    the 9211 tiltevros used is an HBA and doesn't use any caching at all; that's part of the difference between using a RAID controller and an HBA. the other bench is taken from Anvil's ICH10R 4*X25-V setup,
    and you should really have a close look at those results (brought by gullars) if you haven't; they're very detailed and magnificently put.


    thanks for bringing that software and article, it seems very useful,
    i'll go have a look at the StorageReview input.
    edit -
    LOL I'M SUCH A NOOB
    Last edited by onex; 02-19-2010 at 12:20 PM.

  6. #56
    Banned
    Join Date
    May 2009
    Posts
    676
    Quote Originally Posted by Anvil View Post
    You'll have to divide those 30-50 outstanding I/Os by the scale, which in your case is 100.
    how can that be, Anvil?
    that means some of the I/Os are "fragmented", i.e. 0.3 I/Os, 0.2 I/Os, etc..
    that would mean up to 10 seconds for one I/O from that graph..

    here's Windows' performance viewer with Google Earth (start & end) and some FF (middle),
    (note the scale for the current I/Os):

    [perfmon screenshot]
    Last edited by onex; 02-19-2010 at 12:12 PM.

  7. #57
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    onex,

    It's sampled every second, so it's not an exact number.
    Start CDM and click on QD 4 or QD 32 (or set the QD in IOmeter) and watch the graph in perfmon.

  8. #58
    Banned
    Join Date
    May 2009
    Posts
    676
    cool.

  9. #59
    Banned
    Join Date
    May 2009
    Posts
    676
    thanks, Anvil,

    2-128 exponential (# of Outstanding I/Os) in IOMeter -> perfmon @ scale 1:

    [perfmon screenshot]

    proved.
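    For reference, a sketch of what that run looks like, assuming the # of Outstanding I/Os simply doubles each step from 2 to 128:
    Code:
    # "2-128 exponential": the queue depth doubles each step. With the
    # perfmon scale set to 1, the plotted Avg. Disk Queue Length should
    # sit right at the IOmeter value for each step.
    qd_steps = [2 ** n for n in range(1, 8)]   # [2, 4, 8, 16, 32, 64, 128]
    for qd in qd_steps:
        print(f"IOmeter outstanding I/Os = {qd:>3} -> expect ~{qd} in perfmon @ scale 1")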

  10. #60
    Banned
    Join Date
    May 2009
    Posts
    676
    p.s. dh41400,
    I can see your point in testing it, and why you were surprised by the results;
    strangely, it actually means the short-stroking fuss is nothing and has no real benefit at all...

    this is so silly, how did they not think about that at Tom's Hardware?
    losing 80% of a 450GB SAS drive when you can partition it with the same effect?

    that's so idiotic it's actually hard to believe.

  11. #61
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    you know that StorageReview stuff is so far off in regards to queue depth that it isn't even funny. I just don't understand how they could be so wrong about that for so long and nobody called them out on it
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  12. #62
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Computurd View Post
    you know that StorageReview stuff is so far off in regards to queue depth that it isn't even funny. I just don't understand how they could be so wrong about that for so long and nobody called them out on it
    Their article is so ancient that perhaps, given the speeds of the storage devices back then, what they had written was correct at the time. Who knows. It definitely was not meant for SSDs.

  13. #63
    Registered User
    Join Date
    Dec 2009
    Posts
    36
    I was measuring the performance of the WD Black HDD, and considering all IOmeter benchmarks (mine and other reviewers') it should get around 120-160 IOPS (depending on the type and size of data).

    Then put some disk load on and watch the perfmon results: it really starts to "stutter" above a queue length of ~120 (original scale), so the article on StorageReview may be obsolete, but the part concerning disk load measurement is quite accurate.

    It is interesting that nobody noticed this before and that it was taken for granted that a <20 queue depth is heavy load ... and I'm not just talking about XS but worldwide forums as well :/
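    A back-of-the-envelope check, assuming that ~120 is the real (unscaled) queue length: by Little's law the queue depth is roughly IOPS times latency, so on a 120-160 IOPS drive a sustained queue of 120 means each request hangs around for the better part of a second, which is exactly what "stutter" feels like.
    Code:
    # Little's law sketch: queue depth ~= IOPS * average latency.
    # Assumes the ~120 above is the real (scale-1) queue length and uses
    # the 120-160 IOPS range from this post -- an estimate, not a benchmark.
    iops = 150           # roughly what the WD Black delivers on random I/O here
    queue_depth = 120    # sustained outstanding I/Os
    avg_latency_s = queue_depth / iops
    print(f"average time per request in the queue: ~{avg_latency_s:.1f} s")   # ~0.8 s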
    Last edited by dh41400; 02-19-2010 at 07:46 PM.

  14. #64
    Banned
    Join Date
    May 2009
    Posts
    676
    no offense dh41400, yet saying 120-160 IOPS in IOmeter without specifying the size of the data is quite vague...

    when you talk about perfmon, as seen in the examples here (and in Anvil's remarks), you should specify whether it is at the default settings (which differ between counters),
    i.e. the current queue length scale is 10 by default, the others are 100.

    if it is set to 100, as seen here, you should actually divide the results to fit a scale of 1.
    it sounds odd, yet measurements show an IOmeter QD of 120 reading 120 in perfmon @ scale 1, while running some software shows anywhere from 0 up to 200 at scale 100.
    this means, as seen up there ^^, an avg. QD of 0.1-0.2 up to 2.5 per second;
    (as said here) it measures the QD every 1 second, so the graphs aren't that accurate.

    @1 hertz - not sure how different it is between an HDD and an SSD from that perspective, this should be checked;
    after all, an SSD transfers the same data faster, yet has more channels than an HDD...

    p.s. - CT, I did a defrag test here (160GB HDD) and the QD was minimal as well,
    at SR they said it goes up to 120 ...
    Last edited by onex; 02-20-2010 at 10:33 AM.

  15. #65
    Registered User
    Join Date
    Dec 2009
    Posts
    36
    None taken

    The disk performance is dependent on the type of data and the settings in IOmeter. I've been mostly using the default Workstation pattern with the default number of workers, and the results were within the limits mentioned above (dependent on the partition measured).
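    For anyone trying to reproduce this: as far as I remember, IOmeter's default Workstation access spec is roughly 8 KB transfers, 80% read, 80% random; treat these values as an assumption and verify them against the access specification in your own IOmeter setup.
    Code:
    # IOmeter "Workstation" access spec as I remember it -- 8 KiB transfers,
    # 80% read, 80% random. These values are assumed, not taken from this
    # thread, so double-check them in IOmeter before comparing results.
    workstation_pattern = {
        "transfer_size_bytes": 8 * 1024,
        "read_pct": 80,
        "random_pct": 80,
    }
    print(workstation_pattern)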

    I was just looking at these graphs and found another, more important item:

    [perfmon screenshot]

    Split IO/s is definitely the thing I've been referring to ... it's the limit on magnetic HDDs that just keeps them in "stutter mode" under heavy load.

    There is a connection between the QD and these IOPS, and it can clearly be seen on the graph.

    It is all explained pretty well on this site.
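    Same typeperf idea as the sketch earlier in the thread, just logging Split IO/Sec next to the queue length so the connection shows up in one CSV; the "_Total" instance and the English counter names are again assumptions.
    Code:
    # Log Split IO/Sec alongside Avg. Disk Queue Length so the correlation
    # described above can be eyeballed from a single CSV.
    import subprocess

    counters = [
        r"\PhysicalDisk(_Total)\Split IO/Sec",
        r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    ]
    subprocess.run(
        ["typeperf", *counters, "-si", "1", "-sc", "120", "-o", "split_io_log.csv", "-y"],
        check=True,
    )
    print("split-I/O and queue-depth samples written to split_io_log.csv")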

    We are making progress (finally) and I'd like to do this again on an average SSD ... the results should be interesting and could maybe reveal a magic formula for measuring the performance of an average home user config
    Last edited by dh41400; 02-20-2010 at 10:25 AM.

  16. #66
    Banned
    Join Date
    May 2009
    Posts
    676
    as has been said, the 120 from StorageReview seems too much;
    the article you brought up talks about a QD of 2 during normal activity.
    Intel SSDs (M & E) are supposed to handle up to 32 tasks simultaneously.

    all this data needs to be reprocessed and retested;
    it could be that this HDD here suffers from bottlenecks.
    it should all be checked with an SSD.

    remark - note again the 100 scale (perfmon -> the counter being graphed (avg. queue depth etc.) -> properties -> 3rd tab -> scale -> change to 1).
    split I/Os are I/Os that get split while being serviced, due to a fragmented medium or (so it's said) when a program asks for more data than can be transferred in a single request, i.e. 1MB of
    data while the system is issuing requests at 4KB (not sure about it, yet it should be something like that).
    you should see it happening best while defragging the drive.
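    Taking the example above at face value (a 1MB request on a system issuing 4KB transfers), the count works out like this; purely illustrative, since fragmentation adds splits on top of it:
    Code:
    # Illustration only: how many pieces a 1 MB request becomes if every
    # transfer is capped at 4 KB (the example from the post above).
    request_bytes = 1 * 1024 * 1024
    transfer_bytes = 4 * 1024
    pieces = request_bytes // transfer_bytes
    print(f"1 MB request at 4 KB per transfer -> {pieces} transfers")   # 256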

  17. #67
    Registered User
    Join Date
    Dec 2009
    Posts
    36
    I've seen what has been said, but I'm also talking about results that were reproduced on my own setup (Workstation pattern), and the numbers are pretty realistic: 120+ IOPS (without any mistake, and in almost every configuration).

    When we are talking about measuring a real average home user configuration, the first thing is the type of files and a universal setup (at least for the system partition).

    This is why all these results have to be "projected" into real-life performance and scaling, which I've been doing A LOT in the last year or so.

    So the real goal is not just to take a bunch of disks and run the same IOmeter pattern, but also to make sure that the pattern represents real-life load and data transfer on the system partition (other partitions are mostly used for low-demand data such as big files and multimedia).

    There is a whole bunch of tests and patterns out there, but very few represent a realistic situation and user-based experience.

    That's the real problem

  18. #68
    Banned
    Join Date
    May 2009
    Posts
    676
    I've seen what has been said, but I'm also talking about results that were reproduced on my own setup (Workstation pattern), and the numbers are pretty realistic: 120+ IOPS (without any mistake, and in almost every configuration).
    that sounds strange;
    a QD of 32 sounds reasonable at high demand, yet still, perfmon shows other results.

    the first thing is the type of files and a universal setup
    NTFS clusters go from 4KB to 64KB depending on the partition size, and the files matter as well indeed;
    loading POP (Dec 08) brings the read QD up to 10,
    FF with 45 tabs loaded doesn't go over 2.5-3 at most.
    note that Prince is quite a light game; X3 loaded with 5 outstanding I/Os tops.

    aside from that, there doesn't seem to be any program advanced or professional enough to measure every aspect of the disk in real time apart from perfmon, which is so ugly and dated, with those graphs and 16 colors,
    that it is just ridiculous.
    Crysis or other demanding apps might ask for more.

    That's the real problem
    I'd say unserious developers or, at the least, thoughtlessness from MS - no support for deeper system inspection.
    it'd be very disappointing to see that Win 7 Pro doesn't support any decent tool either; maybe the next Google OS will.
    generally this is BS, there should be something somewhere,
    yet unless one appears suddenly or someone creates a serious one,
    that indeed is quite a problem.
    Last edited by onex; 02-20-2010 at 11:55 PM.
