Thread: Areca 1880

    Thank you biker, for your guidance with the RAM. I have sent an email to Areca about that stick and will hopefully receive a reply back soon. It troubles me that they would link a discontinued stick; hopefully this will help remedy the situation... after two failed attempts, though, I need to check more carefully.

    Now, back to the show with some real results for once.
    I'm getting some strange results here. Some of these 4K numbers seem a little too good to be true, so I did a few things to try to make them as accurate as possible: logged off and let the drives garbage-collect (GC) for a few hours...
    *one worker
    *4 GB test file, four times larger than the cache
    *turned off HDD read-ahead
    *turned off volume read-ahead
    *enabled write-through cache to eliminate cache usage (even though that setting applies to writes, I am just trying to take the cache out of the equation entirely)

    Now, I ran this 4K profile with a 5-second ramp and a two-minute run, mainly because that is how I ran my FastPath benches earlier.
    One thing of note: even though I shut off the cache and read-ahead, at QD1-3 the number would start low, then slowly climb over the course of the two-minute run. For instance, QD1 started at 30 and then climbed steadily to its final result.
    I am not sure why, given all the precautions taken to eliminate caching and read-ahead. Outside those lower queue depths, though, the MB/s would not stray by more than 2 or 3 during the run at any of the higher QDs.
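For reference, the run profile described above could be approximated today with a job file for a tool like fio (not the tool used for these results; the job names and the Windows AIO engine choice are my assumptions):

```ini
; Hypothetical fio job approximating the profile above:
; 4 KiB random reads, one worker, 4 GiB file,
; direct I/O to bypass OS caching, 5 s ramp, 120 s timed run.
[global]
ioengine=windowsaio   ; use libaio on Linux
direct=1
bs=4k
rw=randread
size=4g
ramp_time=5
runtime=120
time_based

[qd1]
iodepth=1

[qd4]
stonewall             ; wait for the previous job to finish first
iodepth=4
```

Each additional `[qdN]` section with a `stonewall` would sweep the next queue depth in sequence.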

    The LSI 9260-8i with FastPath is on the left; the Areca 1880IX-12 is on the right.




    Some conclusions (wild speculations, etc., in italics) regarding this...

    Unless my array is degrading faster than any array I have ever seen (I only wrote one 4 GB test file, then ran the reads back-to-back without stopping), something strange is going on here.
    It seems the controller hits a 'hole' of sorts at the higher queue depths:
    there is a point of diminishing returns very quickly. I will let my drives GC overnight, then rerun the higher QDs in the morning to try to rule that possibility out. If the preliminary analysis holds true, then this is something that might be addressed in future firmware revisions.

    Now, would it need to be? The low-QD performance is so superb that for OS usage it looks ridiculously powerful! High-QD situations should be rare indeed; with the latencies involved, they would be resolved very quickly. Of course, server use would suffer... but man, the low-QD numbers are good.

    I think this high-QD 'hole' is what is leading to some less-than-spectacular results in AS SSD, CDM, etc. I have run some cursory PCMark Vantage passes with this array, and I am easily getting within 1k points of my highest score, and that is with only the 4.2 setting and none of my real tricks. I am leveraging the cache a bit, but hey... once I get some RAM I will go to battle stations.

    The difference in low-QD latency is astounding. I would love to see gullars compare that to the latencies involved with the 9260 for his QoS numbers...
    I am going to load some games up, let the GC do its thing, and do some load testing in the AM. I haven't set it up in soft-RAID yet, and don't really care to; I wouldn't use it for soft-RAID, ever. However, after seeing some of mr nizzen's fantastic results, I might. He has hit .05 latency when measured with Everest in pass-through mode.
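As a rough sanity check on what latencies like that imply, Little's Law ties queue depth, latency, and throughput together (outstanding IOs ≈ IOPS × latency). A minimal sketch, with illustrative numbers rather than anything measured here (assuming the .05 figure is in milliseconds):

```python
# Little's Law for storage: outstanding_IOs = IOPS * latency,
# so the theoretical IOPS ceiling is queue_depth / avg_latency.
# Numbers below are illustrative, not measured results from this post.

def iops_from_latency(queue_depth, latency_ms):
    """Theoretical IOPS ceiling for a given queue depth and per-IO latency."""
    return queue_depth / (latency_ms / 1000.0)

def throughput_mb_s(iops, block_bytes=4096):
    """Convert IOPS at a fixed block size to MB/s (decimal megabytes)."""
    return iops * block_bytes / 1_000_000

# e.g. 0.05 ms per 4 KiB read at QD1:
iops = iops_from_latency(1, 0.05)   # 20000.0
mb_s = throughput_mb_s(iops)        # 81.92
print(iops, mb_s)
```

This is why low-QD latency dominates desktop feel: at 0.05 ms, even a single outstanding request already sustains 20k IOPS.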
    Some of my initial qualms have been allayed a bit... the array seems to be working well, and these results are encouraging. I am also going to do some 4K sequential tests and see what we get.
    An unfortunate downside of the 9260-8i with FastPath is that it hurts write speeds tremendously, because you have to run write-through. Even on my best PCMark Vantage runs, I have had to use write-back with the 9260; the benchmark just penalizes slow writes too heavily. FastPath does deliver a tremendous boost with write-back enabled, but you do not see the true potential of the awesome reads: you still leave about 30 percent on the table with write-back enabled.
    Early indications are that the Areca does not suffer from this flaw. So has Areca struck a middle ground here, or hit a home run? If you enable read-ahead, the game changes a bit, so I will try to explore that as well... the sequentials benefit tremendously more from read-ahead than they do on the LSI; Areca's read-ahead algorithms are just flat-out better, IMO.
    Last edited by Computurd; 08-27-2010 at 10:12 PM.