Thread: Anvil's Storage Utilities

  1. #1
    Xtreme Guru
    Join Date: Aug 2009 | Location: Wichita, KS | Posts: 3,887
    If it is not an ioDrive/ACARD, then over 40 MB/s is not possible unless it is a cached result.
    The 9265 can do that @ QD1 in RAID 5; actually 54 MB/s @ QD1.
    These results are with C300s. I bet these 8 x 128 Wildfires can beat this. There was only 512MB of cache on this card when the test was done, and it is direct I/O anyway (FastPath uses no cache).
    LINK
    When I mention caching coming into play in the header above that, I am speaking of the sequential write speed.

    @Anvil - I would like the option to retain a static test file as well. This would be helpful. Even with the *now* well-known uber longevity of these drives, there's no need to speed along degradation in RAID arrays.
    Also, one last wish: 512B @ QD 128, so I can show 465,000 IOPS.
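
    For scale, a quick back-of-the-envelope sketch in Python (the IOPS figure is the one wished for above; nothing here is a measured result):

        # 465,000 IOPS at 512-byte blocks is a headline IOPS number but
        # only modest bandwidth, which is why tiny block sizes are the
        # classic way to inflate IOPS.
        iops = 465_000
        block = 512  # bytes per I/O
        print(f"{iops * block / 1e6:.0f} MB/s")  # -> ~238 MB/s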

    I'm doing some long-run tests this week for an article, but I will put the toys up and do some playing soon, so I will post some results.


    Results over ~92 MB/s in QD1 4KB reads must be cached, since nothing can do more than that.
    The architecture of RAID controllers, how they fundamentally operate, and thus their low-QD performance are about to be turned on their heads... the 12Gb/s plugfest is going to bring about a change in RAID controllers that is going to be just unbelievable.

    I will say this... the Fusions/I/O Extremes, etc., will soon see the playing field changed very dramatically.
    Last edited by Computurd; 08-05-2011 at 06:25 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  2. #2
    SLC
    Join Date: Oct 2004 | Location: Ottawa, Canada | Posts: 2,795
    Quote Originally Posted by Computurd View Post
    The 9265 can do that @ QD1 in RAID 5; actually 54 MB/s @ QD1.
    That result is partially cached. Iometer with a very large test file will not show those numbers. You can see from the latency that the real QD1 random reads are around 32 MB/s for that config of yours. Edit: you even show yourself that the real QD1 random read number is 31 MB/s later in the review...

    RAID controllers are not able to increase 4K QD1 random read performance above what the SSDs themselves are capable of, because we are just talking about raw latency here, i.e. the time it takes your devices to respond to a small-block read command. The absolute best a controller can do is add no overhead on top of that. It cannot make any device respond faster than it is capable of, no matter what kind of voodoo magic you believe in.
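
    To put rough numbers on that argument, a minimal sketch; the throughput figures are ones quoted in this thread, and the latency readings are illustrative, not measurements:

        # At QD1 there is exactly one outstanding I/O, so throughput is
        # simply block size divided by per-IO latency. Any QD1 MB/s
        # figure therefore implies a specific device response time.
        BLOCK = 4096  # bytes per 4K random read

        def implied_latency_us(mb_per_s: float) -> float:
            """Per-IO latency (us) implied by a QD1 throughput."""
            ios_per_s = mb_per_s * 1_000_000 / BLOCK
            return 1_000_000 / ios_per_s

        for mbps in (31, 54, 92):
            print(f"{mbps} MB/s @ QD1 -> {implied_latency_us(mbps):.0f} us/read")

    A controller can only add to that per-IO latency, never subtract from it: 54 MB/s @ QD1 implies ~76 us per read, quicker than the attached drives actually respond, so some of those reads must have come from cache.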
    Last edited by One_Hertz; 08-05-2011 at 08:04 PM.

  3. #3
    Banned
    Join Date: Jan 2010 | Location: Las Vegas | Posts: 936
    Quote Originally Posted by One_Hertz View Post
    RAID controllers are not able to increase 4K QD1 random read performance above what the SSDs themselves are capable of, because we are just talking about raw latency here, i.e. the time it takes your devices to respond to a small-block read command. The absolute best a controller can do is add no overhead on top of that. It cannot make any device respond faster than it is capable of, no matter what kind of voodoo magic you believe in.
    Nicely explained. It is surprising how often people forget that simple fact.

  4. #4
    Xtreme X.I.P.
    Join Date: Apr 2008 | Location: Norway | Posts: 2,838
    Quote Originally Posted by Computurd View Post
    @Anvil - I would like the option to retain a static test file as well. This would be helpful. Even with the *now* well-known uber longevity of these drives, there's no need to speed along degradation in RAID arrays.
    Also, one last wish: 512B @ QD 128, so I can show 465,000 IOPS.

    I'm doing some long-run tests this week for an article, but I will put the toys up and do some playing soon, so I will post some results.
    I've been busy lately, but I've already made the necessary adjustments to retain the test files and will send you that build later tonight.
    I will need some feedback on that though, as SandForce-based drives behave "differently" from other drives, especially in this regard (static test files).
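
    For anyone curious what retaining the test file might look like, a hypothetical sketch; the file name, default size, and fill strategy are my assumptions, not Anvil's actual implementation:

        import os

        TEST_FILE = "asu-test-file.bin"  # assumed name
        TEST_SIZE = 1 * 1024**3          # assumed 1 GiB default

        def ensure_test_file(path: str = TEST_FILE, size: int = TEST_SIZE) -> str:
            # Reuse the existing file if it matches the expected size,
            # so repeated runs do not rewrite (and wear) the array.
            if os.path.exists(path) and os.path.getsize(path) == size:
                return path
            # Otherwise create it once, filled with random data.
            with open(path, "wb") as f:
                remaining = size
                while remaining > 0:
                    chunk = os.urandom(min(remaining, 1 << 20))
                    f.write(chunk)
                    remaining -= len(chunk)
            return path

    How the file is filled determines how compressible it is, which is presumably part of why SandForce drives behave "differently" with a static test file.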

  5. #5
    Banned
    Join Date: Jan 2010 | Location: Las Vegas | Posts: 936
    It would be more realistic for ASU's default to use 100% incompressible data.
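
    A minimal illustration of why that default matters, with zlib standing in for a SandForce-style compressing controller (illustrative only, not how the drive actually compresses):

        import os, zlib

        random_buf = os.urandom(1 << 20)  # effectively incompressible
        zero_buf = bytes(1 << 20)         # maximally compressible

        for name, buf in (("random", random_buf), ("zeros", zero_buf)):
            ratio = len(zlib.compress(buf)) / len(buf)
            print(f"{name}: compresses to {ratio:.1%} of original size")

    Compressible test data lets a SandForce drive write far less to NAND than the benchmark thinks it wrote, so the throughput numbers come out inflated; 100% incompressible data takes that shortcut away.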
