Thread: Xtreme SSD benchmarking

  1. #1
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349

    Xtreme SSD benchmarking

    Actually, it's probably not all that extreme, but mehhh....it's my first (own) foray into SSD benchmarking.

Up for testing are an OCZ Vertex 3 240 GB SATA-3 SSD and an OCZ RevoDrive 3 240 GB PCIe x4 SSD.

Unfortunately, I don't have any newer systems available for benchmarking (and my daily driver is occupied right now), so I had to use one of my older Socket F Opteron-based systems. That being said, this platform does NOT let these drives reach their full potential, BUT for relative and comparative purposes, it should be okay.

    I'll know more once I start beating/kicking the crap out of it when I start running my FEA simulations on/with it.

    Note:
I did NOT know that the RevoDrive ONLY works with Windows 7 x86/x64. So, I have installed Windows 7 Ultimate x64 on a Hitachi 1 TB 7200 rpm SATA drive so that I can run the benchmarks on the SSDs without the OS being ON said SSDs. I don't think that makes a difference, but it's an important note, I think. Also, the Hitachi 1 TB is on the LSI1068E SAS 3Gbps controller; I took it off the NFP3600 so that the OS I/Os should be independent of the benchmarking I/Os.



    So that's the Vertex 3.



    And here's the RevoDrive 3.

    Two things:

1) Remember that the Vertex 3 240 GB SATA SSD is on a 3Gbps SATA-II controller (I think). I forget whether the NFP3600 is SATA-II or SATA-I. If it is only SATA-I, then the drive is probably hitting the interface's limit, so the data rates it's getting wouldn't be that surprising. I CAN put it on the SAS 3Gbps controller instead, but I wanted to make sure that I would be able to pass the TRIM command to the drives after testing, because I know that the SAS controller won't.

2) The RevoDrive 3 240 GB PCIe x4 SSD card is designed with Gen2 PCIe slots in mind, but I think that my board is PCIe Gen1 only, so it only gets half the theoretical bandwidth. (And it doesn't really matter that it's physically plugged into an x16 slot, despite the x4 connector.)

So, again, the RevoDrive isn't tested to its full potential either. But you can see the huge difference. Even if the Vertex 3 were on SATA-3, it could only cap out at the 6 Gbps limit, while the RevoDrive 3 is already beyond that with a Gen1 PCIe slot.
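To put rough numbers on the interface limits discussed above, here's a quick back-of-the-envelope sketch (my own figures, not from the post): SATA 1.5/3/6 Gbps and PCIe Gen1/Gen2 all use 8b/10b line coding, so usable bandwidth is about the line rate times 8/10.

```python
# Approximate usable bandwidth ceilings after 8b/10b encoding overhead.
# Line rates are per the SATA and PCIe specs; this ignores protocol
# overhead beyond the encoding, so real-world throughput is lower still.

def usable_mb_s(line_rate_gbps, lanes=1):
    """Usable bandwidth in MB/s: line rate (Gbit/s) * lanes * 8/10 / 8 bits."""
    return line_rate_gbps * lanes * 1000 / 10

sata1 = usable_mb_s(1.5)                 # ~150 MB/s
sata2 = usable_mb_s(3.0)                 # ~300 MB/s
sata3 = usable_mb_s(6.0)                 # ~600 MB/s
pcie_gen1_x4 = usable_mb_s(2.5, lanes=4) # ~1000 MB/s
pcie_gen2_x4 = usable_mb_s(5.0, lanes=4) # ~2000 MB/s

print(sata1, sata2, sata3, pcie_gen1_x4, pcie_gen2_x4)
```

So even on a Gen1-only board, the RevoDrive's x4 link has roughly the headroom of SATA-3 plus change, while a SATA-II port tops out around 300 MB/s.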

    Capacity: CHS=(29185/255/63), 468857025 sectors = 228934 MByte

    Interface transfer rate w/ block size 128 sectors at 0.0% of capacity:
    Sequential read rate medium (w/out delay): 99516 KByte/s
    Sequential transfer rate w/ read-ahead (delay: 0.71 ms): 110602 KByte/s
    Repetitive sequential read ("core test"): 91401 KByte/s
    Sequential write rate medium (w/out delay): 89806 KByte/s
    Sequential transfer rate write cache (delay: 0.78 ms): 98928 KByte/s
    Repetitive sequential write: 90594 KByte/s

    Sustained transfer rate (block size: 128 sectors):
    Reading: average 101811.4, min 98033.1, max 110715.8 [KByte/s]
    Writing: average 90492.5, min 89426.0, max 92453.0 [KByte/s]

    Random access read: average 0.24, min 0.14, max 0.36 [ms]
    Random access write: average 0.26, min 0.15, max 2.52 [ms]
    Random access read (<504 MByte): average 0.20, min 0.10, max 1.44 [ms]
    Random access write (<504 MByte): average 0.19, min 0.07, max 0.90 [ms]

    Application profile `swapping': 62307.9 KByte/s
    Application profile `installing': 145727.0 KByte/s
    Application profile `Word': 96627.1 KByte/s
    Application profile `Photoshop': 95646.6 KByte/s
    Application profile `copying': 142564.1 KByte/s
    Application profile `F-Prot': 79350.6 KByte/s
    Result: application index = 96.8
    Here's the h2benchw result for the Vertex 3. I LOVE this benchmark because it is the ONLY one that has a "swap" application/test profile which mimicks swap behavior and for me, that's a key/critical factor of performance. (Most poeple probably wouldn't really care, but when my record is a 90 GB swap file on a system with 128 GB of RAM; it matters. It matters GREATLY!)

    Capacity: CHS=(29186/255/63), 468873090 sectors = 228942 MByte

    Interface transfer rate w/ block size 128 sectors at 0.0% of capacity:
    Sequential read rate medium (w/out delay): 197683 KByte/s
    Sequential transfer rate w/ read-ahead (delay: 0.36 ms): 217136 KByte/s
    Repetitive sequential read ("core test"): 157648 KByte/s
    Sequential write rate medium (w/out delay): 210014 KByte/s
    Sequential transfer rate write cache (delay: 0.34 ms): 252936 KByte/s
    Repetitive sequential write: 254520 KByte/s

    Sustained transfer rate (block size: 128 sectors):
    Reading: average 189797.7, min 178997.9, max 212594.1 [KByte/s]
    Writing: average 211219.4, min 206216.6, max 255300.0 [KByte/s]

    Random access read: average 0.25, min 0.14, max 1.55 [ms]
    Random access write: average 0.26, min 0.15, max 2.57 [ms]
    Random access read (<504 MByte): average 0.24, min 0.10, max 2.65 [ms]
    Random access write (<504 MByte): average 0.25, min 0.09, max 2.59 [ms]

    Application profile `swapping': 78163.8 KByte/s
    Application profile `installing': 232000.1 KByte/s
    Application profile `Word': 197252.4 KByte/s
    Application profile `Photoshop': 176655.2 KByte/s
    Application profile `copying': 271686.0 KByte/s
    Application profile `F-Prot': 102337.5 KByte/s
    Result: application index = 157.5
And here's the RevoDrive 3 h2benchw results. Interesting to note that despite its HUGE sequential transfer rate (STR) advantage, when it comes to swapping, it doesn't amount to much: an additional ~15 MB/s, for a cost differential of about $100 at the time of purchase.
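Putting that trade-off in numbers, using the `swapping' profile results above (the ~$100 price gap is the rough figure quoted at time of purchase, not a current price):

```python
# Back-of-the-envelope value comparison for the `swapping' profile.
vertex3_swap_kbs = 62307.9    # h2benchw swapping result, Vertex 3
revodrive3_swap_kbs = 78163.8 # h2benchw swapping result, RevoDrive 3
price_gap_usd = 100           # approximate, per the post

gain_mb_s = (revodrive3_swap_kbs - vertex3_swap_kbs) / 1000
cost_per_mb_s = price_gap_usd / gain_mb_s

print(round(gain_mb_s, 1), "MB/s gain,", round(cost_per_mb_s, 2), "$/MB/s")
```

Roughly $6 per extra MB/s of swap throughput, which is a steep premium if swapping is the workload you care about.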

    Hopefully this info helps for those who might find it useful/good-to-know.
    flow man:
    du/dt + u dot del u = - del P / rho + v vector_Laplacian u
    {\partial\mathbf{u}\over\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -{\nabla P\over\rho} + \nu\nabla^2\mathbf{u}

  2. #2
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Okay, so I checked the manual and the nVidia website for the NFP3600+3050 and they are SATA-II ports.

    I'll probably do one more test where I move them over to the LSISAS1068E just to be safe/sure and post the results here.

  3. #3
    I am Xtreme
    Join Date
    Dec 2004
    Location
    France
    Posts
    3,462
I think your Vertex 3 is running slow, no?

  4. #4
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Read.

    But according to the Anandtech preview article, it should be faster, yes.

    http://www.anandtech.com/show/4186/o...cused-sf2200/7

But some things to note that are different: he's testing on a board that's actually capable of SATA-3 6 Gbps, whereas I can't (not yet, anyway), and he's also using a shallower queue depth.

Also, what I'm noticing is that if I don't TRIM after each run, the drive's performance in the next test suffers a little bit. (I'm actually using HDTach to confirm that right now.)

But yes, this isn't the best that the Vertex 3 has done, not by a long shot. I'm only two days into testing, though, so there are more benchmarks coming.

  5. #5
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
So it looks like I must be hitting some kind of limit, either on the drive or somewhere in the system.

    I plugged the Vertex 3 into the SAS controller and this is the HDTach plot that I get:



So it could be anything from the CPU to the memory to the controller itself to the drive. I don't know. Any suggestions/ideas/things that I can try?

  6. #6
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    So here's yet another interesting morsel for y'all:

    The RevoDrive 3 is dropping out during HDTach.

  7. #7
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    Okay, so here's the final word:

I got in touch with OCZ's support forums and also submitted a ticket, but I'm going to be returning the RevoDrive 3 tomorrow: after trying a dizzying array of hardware configurations, the RevoDrive 3 ended up with at least 24 unexpected power losses while running the write portion of the HDTach benchmark. All the support forum rep asked was whether I could test it on one of the compatible motherboards, but I don't have one (that would require a new system build), and yeah... I'm not going to do that.

    *edit*
AND periodically, it WILL and DOES cause a BSOD. That means that if you're thinking of putting your OS on this drive, the sudden and unexpected power losses DO have the potential to corrupt your Windows install. So be warned.

What's weird is that I was able to do some of the testing, but the moment I started trying to chase down why the Vertex 3 was showing lower-than-expected numbers, the problems with the RevoDrive 3 started popping up and dominated everything.

    Once I do my next system build, I MIGHT try it again; but we will have to see.

The Vertex 3 is still showing the low numbers. I'm only able to get a max of 140 MB/s write on it, which isn't as high as what Anandtech was showing, but I'm also pretty sure that my system doesn't have AHCI, which might account for the difference.

I'm putting it through the final rounds of initial testing now, knowing that there might be something at the system level that's hindering its performance. And then, at least with the Vertex 3, I can move on to the application-level testing. (Finally.)

  8. #8
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
Testing for the SSD has concluded. I think that there's something wrong either with the system, with the SSD, or with inconsistencies in the testing itself.

Running an Ansys Workbench Mechanical FEA with 10 equilibrium iterations, scratch on the SSD:
    Iterative solution:
    CPU time: 22470.930 s
    Total time: 33636.000 s

    Direct solution:
    CPU time: 38511.188 s
    Total time: 57900.000 s

Running the same FEA using a 2.5" Fujitsu 73 GB 10krpm SAS 3 Gbps drive:
    Iterative:
    CPU time: 4050.203 s
    Total time: 6122.000 s

    Direct:
    CPU time: 14830.000 s
    Total time: 35797.000 s

Using my 8-core system (4x AMD Opteron 880, 2.4 GHz dual-core) with two Hitachi 146 GB 15krpm Ultra320 SCSI hard drives, and using all 8 cores:
    Iterative:
    CPU time: 3733.672 s
    Total time: 4073.000 s

    Direct:
    CPU time: 4904.562 s
    Total time: 14812.000 s
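To make the gap easier to see, here are the total-time ratios from the three runs above (the labels are mine; note the SCSI run was also on a different, 8-core Opteron 880 system, so it isn't an apples-to-apples storage comparison):

```python
# Wall-clock comparison of the three storage setups from the Ansys runs
# quoted above (total times in seconds).
totals = {
    "SSD": {"iterative": 33636.0, "direct": 57900.0},
    "Fujitsu 10k SAS": {"iterative": 6122.0, "direct": 35797.0},
    "2x 15k U320 SCSI (8-core box)": {"iterative": 4073.0, "direct": 14812.0},
}

baseline = totals["SSD"]
for name, t in totals.items():
    it = baseline["iterative"] / t["iterative"]
    dr = baseline["direct"] / t["direct"]
    print(f"{name}: iterative {it:.1f}x, direct {dr:.1f}x faster than the SSD run")
```

The SSD run comes out 5-8x slower on the iterative solver, which is backwards from what you'd expect and points at a system-level problem rather than the drive itself.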

    I'll probably have to play around with it some more at a later date, and try and see if I can get to the bottom of it.

I ended up returning all of the drives, given that the RevoDrive 3 couldn't stabilize properly.

    But I probably won't do more testing until I actually do my new system build.
    Last edited by alpha754293; 10-21-2011 at 06:20 PM.

  9. #9
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    128
Maybe this is a stupid question, but is this bench the best one for disk performance?

  10. #10
    Xtreme Member
    Join Date
    Apr 2006
    Location
    Ontario
    Posts
    349
    I think so. I think that it gives a much better, more complete picture of performance.

A lot of other sites just use the standard PCMark or something along those lines (like Iometer), but the reality of those tests is that what matters depends on how you use your drive and what you're going to be using it for.

I think that h2benchw gives a more accurate perspective of performance, but it really depends on what you're going to be doing. For me, I was looking to use SSDs for my swap file, because swap performance has a HUGE impact on the overall performance of my simulation runs. So I was very much interested in those numbers, and it appears that NONE of the usual computer sites even run this bench.
