Page 2 of 3 - Results 26 to 50 of 63

Thread: Vertex 4 F/W 1.3 vs 1.4 across ich10r, pch, Areca 1261 and 1880 and ...

  1. #26
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    Quote Originally Posted by Anvil View Post
    ... and the fix might be here already

    interim fw 1.4.1.3

    A two step update (read the first post in the link)

    Improvements since version 1.4:
    • Enhancement: Improved RAID card compatibility.
    • Fixed: Corner case issue where drive would not resume from S3/S4 properly.
    • Fixed: Rare issue where some platforms would not respond correctly after a cold boot.

    I'll be updating shortly.


    Maybe I'll have to buy them again. I'll wait for tests.

  2. #27
    Registered User
    Join Date
    Apr 2012
    Location
    West Sussex, England
    Posts
    32
    Hi,

    For interest here are a couple of benchmarks I've run this evening on 2 x 256GB V4s in R0, with the latest 1.4.1.3 fw.

    [Attachment: V4 RAID 0_512GB_1GB-20120609-2120.png]


    [Attachment: as-ssd-bench V4 RAID 0 09.06.2012 20-37-33.png]

    [Attachment: as-ssd-bench V4 RAID 0 09.06.2012 20-39-01.png]

    Some observations I'd like to share based on my personal testing and use of 2 x 256GB V4s in R0:

    1. Just like a single V4, my array has not dropped in performance regardless of how hard I bash it.
    2. I've noticed that a few seconds after deleting a large file (e.g. an 80GB file full of random data) there is a burst of continuous disk activity (the HDD LED glows continuously). This appears to me to be just the same as is evident for a single V4 with TRIM pass-through in action. It seems the V4 invokes some form of on-the-fly GC activity regardless of whether TRIM pass-through is in action or not. With V4s in R0, my feeling is there is no need to hold our breath waiting for TRIM to be enabled for arrays.
    3. I've also noticed that the peak of performance on my array does not occur straight after a secure erase. I have found that the V4 hits a peak in benchmarks when all blocks have been written to at least once and the pool of available blocks is relatively small. For example, my best results have come when I filled the drive with random data files, leaving just 50GB free, and then wrote and deleted another 30GB random data test file (all in one Windows session). Of course, all this is artificial/synthetic nonsense to hit an extreme result in a benchmark and doesn't have much to do with the day-to-day real world - but as this is the extreme storage forum I thought you might be interested.

    Regds, JR

    p.s. here's a link for a handy little test file creation utility http://www.softpedia.com/get/System/...leFiller.shtml
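    For anyone who'd rather script the test files than use a utility, here's a minimal Python sketch that does the same job (the `fill_file` name and chunk size are my own, not from the linked tool). It deliberately writes incompressible random data, which keeps results honest on compressing controllers (the SandForce drives), even though the Vertex 4 doesn't compress:

    ```python
    import os

    def fill_file(path, size_bytes, chunk=1 << 20):
        """Write size_bytes of pseudo-random (incompressible) data to path
        in 1 MiB chunks, and return the number of bytes written."""
        written = 0
        with open(path, "wb") as f:
            while written < size_bytes:
                n = min(chunk, size_bytes - written)
                f.write(os.urandom(n))
                written += n
        return written

    # e.g. fill_file("testfile.bin", 80 * (1 << 30)) for an 80GB file
    ```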
    Last edited by JR.; 06-09-2012 at 01:19 PM.
    Asus P8Z77 WS; Core I7-3770K; 16GB Corsair Dominator Platinum 2400MHz; 2 x Asus GTX580 SLI; 2 x OCZ Vector 256GB in R0

    Dell XPS 17; Core I7-2670QM, GT555M; 2 x OCZ Vertex 4 256GB

  3. #28
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    If you get a lucky run you can get full scaling in RAID-0 (at sequentials and high QDs).

    I'm playing with stripe sizes in W8 on SB.
    -
    Hardware:

  4. #29
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    JR - wow! - very nice numbers!

  5. #30
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    f/w 1.4.1.3 is a little faster on the 128s
    Next up - see if they work on the 1880 now.




  6. #31
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Looks like f/w 1.4.1.3 is still broken on the 1880

  7. #32
    SSD faster than your HDD
    Join Date
    Feb 2005
    Location
    Kalamazoo, MI
    Posts
    2,627
    I let the devs know Steve.

  8. #33
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Quote Originally Posted by RyderOCZ View Post
    I let the devs know Steve.
    Now I am not sure - I can't seem to format 2xR0 C300/64s now in the same setup - might just be me - I will need to research some more!

  9. #34
    SSD faster than your HDD
    Join Date
    Feb 2005
    Location
    Kalamazoo, MI
    Posts
    2,627
    How many Vertex 4 drives are you running, just 2? 128GB, correct?

  10. #35
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    ^^ correct just 2x128s

  11. #36
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Someone else should confirm, but it looks like Vertex 4 f/w 1.4.1.3 is still broken on the Areca 1880.


  12. #37
    SSD faster than your HDD
    Join Date
    Feb 2005
    Location
    Kalamazoo, MI
    Posts
    2,627
    The devs and test team have gotten back to me; they have used FW 1.49 on the 1880i with no issues.
    64k stripe, all cache settings disabled.

    Please list your full config for me.

  13. #38
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Mr Ryder sir, thanks for the help - config as shown here - http://www.xtremesystems.org/forums/...=1#post5109283

    Also tried with latest areca f/w dated early 2012 - no go

    I wasn't able to get them to work with all cache settings disabled either - but even if the Vertex 4 did work that way, I think most folks would say that would be an unacceptable "permanent" fix.

  14. #39
    SSD faster than your HDD
    Join Date
    Feb 2005
    Location
    Kalamazoo, MI
    Posts
    2,627
    I meant your system config, please: what motherboard, CPU, how much memory, etc. you are running.

  15. #40
    SSD faster than your HDD
    Join Date
    Feb 2005
    Location
    Kalamazoo, MI
    Posts
    2,627
    After you updated to 1.4, and then after you updated to 1.4.1.3, did you try secure erasing the SSDs?

  16. #41
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    It is the Areca IX versions that have the problem with the Marvell controllers used on several types of SSDs. Testing with the I version will not show the problem, as it does not have the expander that is creating the issue.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  17. #42
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'll check on mine later tonight.
    (need to get to the computer with the Areca)

    Will check the LSI as well. (9265)
    -
    Hardware:

  18. #43
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Good morning Mr. Ryder -

    I have tried on 2 different systems with the same incompatibility result -

    system 1 - 2600K on P67 asus maximus IV extreme with kingston hyperx KHX2000C8D3T1K3/6GX 2x2GB
    system 2 - 920 on x58 gigabyte ex58-extreme with corsair CM3X2G1600C8D 6x2GB

    much thanks for the help!
    Last edited by SteveRo; 06-12-2012 at 03:22 AM.

  19. #44
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Quote Originally Posted by RyderOCZ View Post
    After you updated to 1.4 and then after you updated to 1.4.1.3 did you try secure erasing the SSD's?
    No, just reformat and go - secure erase via the Vertex toolbox?

  20. #45
    I am Xtreme
    Join Date
    Aug 2008
    Posts
    5,586
    yehey


  21. #46
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'm seeing the same thing as Steve and I'm using the Areca 1880IX-16 4G

    2R0 Vertex 4 256GB on fw 1.4.1.3

    1880IX-V4_S6.PNG

    ATTO pretty much shows what's going on in the other benchmarks as well (so I won't post the others).
    ATTO works its way through the steps very slowly.

    ---

    I tried a single drive in PassThrough mode as well

    1880IX-V4_S7_passthrough.PNG
    Last edited by Anvil; 06-12-2012 at 02:01 PM.
    -
    Hardware:

  22. #47
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    ^^ thanks for confirming ...

    edit - What Paul said - probably the expander. I think the expander was the reason why ACards could only connect to the 1880ix at SATA1.
    How about the 9265?
    Last edited by SteveRo; 06-12-2012 at 02:13 PM.

  23. #48
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'm about to test on the LSI, 10-15 minutes

    raid-0 created, all is OK so far

    NRA = No Read Ahead
    DIO = Direct IO
    WT = Write Through

    ATTO_NRA_WT_DIO.PNG

    I'll change some parameters and run with caching ++
    (just to see if there is something weird)

    Changed from WT to WB (WriteBack)

    ATTO_NRA_WB_DIO.PNG
    Last edited by Anvil; 06-12-2012 at 02:55 PM.
    -
    Hardware:

  24. #49
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    ^^ very nice numbers - about right??

  25. #50
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Hi Steve

    I'll have to do some more tests, it's not acting up like on the Areca.

    (I'll compare to a 2R0 V3 240GB later tonight)
    -
    Hardware:

