
Thread: Vertex LE vs Crucial C300

  1. #226
    Banned
    Join Date
    May 2009
    Posts
    676
hrrm,
    6x ACARDs score 62K in PCMV while 2 C300s plus 2 LEs score 100K?
    And these ACARDs win the PCMV world record by a decent margin over 8 C300s?
    What exactly is going on?

    Results across these benches don't seem scientific at all; every benchmark shows different figures, and it's almost impossible to get a real approximation of how any hardware behaves under a real-world workload.

    As for the cache on the 1231ML vs the ICH10R: maybe it's the protocol differences between PCIe and the southbridge interconnect (DMI), and the RAID card could be suffering from its own overhead. Maybe an HBA such as the 9211 would behave differently, or closer to the southbridge.
    It should maybe be tested with the same SSD setup to figure out the real difference. The 2GB the ACARDs give vs the 3GB from the 2 C300s and 2 LEs isn't a severe difference;
    it could be burst speed, yet if this benchmark finishes 3GB of data transfer in about a second, then it's all cache operation.
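    Rough numbers to back that up (my assumptions on per-drive throughput, not anything measured):

    Code:
    # Back-of-the-envelope: can 3GB finish in ~1 second without a cache?
    # Assumption: 4 drives on SATA 3Gb/s ports, ~270 MB/s sustained each.
    data_gb = 3
    drives = 4
    per_drive_mbps = 270
    raw_mbps = drives * per_drive_mbps            # ~1080 MB/s combined
    seconds_raw = data_gb * 1024 / raw_mbps
    print(f"raw array: {raw_mbps} MB/s -> {seconds_raw:.1f} s for 3GB")
    # prints ~2.8 s, so finishing in ~1 s means a controller cache absorbed it.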

On a different matter:
    how in hell did the IOMeter 4KB random read test on this setup do 510MB/s vs CDM's 31MB/s?
    There doesn't seem to be any queue depth involved, as the average access time is ~0.4ms (or else the QD is 32, which is still very high compared to 4 X25-Vs at 290MB/s).
    And one last thing: it seems the merge of the LEs and the C300s has dropped the C300's relatively high 0.7ms write access time down to only 0.212ms.

  2. #227
    Xtreme Member
    Join Date
    Aug 2009
    Location
    Nelson, New Zealand
    Posts
    367
    Great data and graphs.

    It would be interesting to compare the C300 to the X25-M on an equal-cost basis -- say two C300 128GB R0 vs. four X25-M 80GB R0.
    Last edited by AceNZ; 05-03-2010 at 11:26 PM.

  3. #228
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by onex View Post
hrrm,
    6x ACARDs score 62K in PCMV while 2 C300s plus 2 LEs score 100K?
    And these ACARDs win the PCMV world record by a decent margin over 8 C300s?
    What exactly is going on?

    Results across these benches don't seem scientific at all; every benchmark shows different figures, and it's almost impossible to get a real approximation of how any hardware behaves under a real-world workload.
Don't confuse the PCMV Suite with the HDD test.
    As I've commented with my screenshots several times, although not in the last screenshot, the HDD test is performed on a non-OS drive.
    It is bootable, however, meaning it's a standard ICH array created using IRST.

    The reason the ACards win over the 8R0 C300 comes down to the fact that the ACards are great at QD1, whereas the 9260 needs a much higher QD to perform. (Of course there is more to it.)
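    A rough way to see the QD effect, using Little's law with made-up latencies (illustrative assumptions only, not measured figures):

    Code:
    # Little's law: throughput ~= queue_depth / latency.
    # The latencies below are assumptions for illustration, not measurements.
    def mb_per_s(queue_depth, latency_ms, block_kb=4):
        iops = queue_depth / (latency_ms / 1000.0)
        return iops * block_kb / 1024.0

    # A low-latency RAM drive shines at QD1 ...
    print(f"ACard-like @ QD1, 0.05 ms:  {mb_per_s(1, 0.05):.0f} MB/s")   # ~78
    # ... while a NAND array needs outstanding IOs to hide its latency.
    print(f"NAND array @ QD1, 0.20 ms:  {mb_per_s(1, 0.20):.0f} MB/s")   # ~20
    print(f"NAND array @ QD32, 0.25 ms: {mb_per_s(32, 0.25):.0f} MB/s")  # ~500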

    Quote Originally Posted by onex View Post
On a different matter:
    how in hell did the IOMeter 4KB random read test on this setup do 510MB/s vs CDM's 31MB/s?
    There doesn't seem to be any queue depth involved, as the average access time is ~0.4ms (or else the QD is 32, which is still very high compared to 4 X25-Vs at 290MB/s).
    And one last thing: it seems the merge of the LEs and the C300s has dropped the C300's relatively high 0.7ms write access time down to only 0.212ms.
QD on the iometer test is 64; I think you've asked that question earlier.
    The C300/LE combo worked great. Highly unusual, but it worked.
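    For what it's worth, the numbers hang together under Little's law (my arithmetic, not a new measurement):

    Code:
    # Cross-check: 510 MB/s of 4KB reads at QD 64.
    mbps, block_kb, qd = 510, 4, 64
    iops = mbps * 1024 / block_kb         # ~130,000 IOPS
    avg_latency_ms = qd / iops * 1000     # ~0.49 ms
    print(f"{iops:.0f} IOPS -> {avg_latency_ms:.2f} ms avg access time at QD {qd}")
    # ~0.5 ms matches the ~0.4-0.5 ms reported; CDM's 4K test runs at QD1,
    # which is why the same hardware only shows ~31 MB/s there.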

    Quote Originally Posted by AceNZ View Post
    Great data and graphs.
    It would be interesting to compare the C300 to the X25-M on an equal-cost basis -- say two C300 128GB R0 vs. four X25-M 80GB R0.
Thanks.
    Are you thinking of the 256GB C300 or the 128GB?
    The 4R0 X25-Ms are doable; in fact, they are already done.

    GullLars has got the iometer result files for 1-4 X25-M drives and has already created a few graphs.
    He'll probably create a separate thread for the graphs.
    Last edited by Anvil; 05-04-2010 at 12:53 AM.

  4. #229
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
Anvil - enjoying watching! I suspect the C300s @ 4.3 / PCIe 119 will be nicely higher, and then Frankenstein another nice step up.
    Last edited by SteveRo; 05-04-2010 at 01:32 AM.

  5. #230
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Steve,
My computer locked up at 4.4GHz during one of the tests, so I need to get it stable before continuing.
    It did show an increase in CDM compared to 4.3GHz.

  6. #231
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
Anvil - I think you might be right regarding the ACards vs the new high-end SSDs when it comes to the PCMV suite test.
    When I overclock an array of ACard 9010s (4.8, PCIe 119) I can get 4K random reads at low queue depth of approx 70MB/s.
    What do the best SSD arrays get - seems like around 30MB/s, and when overclocked - 35-40MB/s?

  7. #232
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
Anvil - what are you using for cooling? What proc are you using?

  8. #233
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Steve,
I'm still on the 920 D0 with air cooling (idle temp ~40C @ 4.4GHz). I'm sure 4.4 and maybe 4.5 is doable on air, at least for the HDD tests.

  9. #234
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
The good news is that the 920 D0 is virtually indestructible, and you have one of the best mobos for OC'ing them.
    Keep an eye on your temps; that will be your limiter.
    All i7s have a temp throttle built in that kicks in at 100-101C - I know first hand, I have tested for it.
    I recommend RealTemp.
    In the UD7 BIOS you can take the temps right up to the red-colored values.
    I have even gone into the red for vcore with no problems, but your cooling has to be able to handle it.
    Obviously don't let this run overnight; the only retail 980X reported killed so far was running high voltage under a 100% WCG load 24/7 for 5 days.

    edit - 980 dead - http://www.xtremesystems.org/forums/...7&postcount=31
    Last edited by SteveRo; 05-04-2010 at 02:13 AM.

  10. #235
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Steve,
I'll look into the OCing later today. I do use RealTemp for monitoring, as well as CPU-Z for the VCore.

    Bad luck on that 980; I suppose it's just one of those things that happens.

  11. #236
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Crucial just posted the new firmware

    Release Date: 5/4/2010
    Change Log:
    Improved Power Consumption
    Improved TRIM performance
    Enabled the Drive Activity Pin (Pin 11)
    Improved Robustness due to unexpected power loss
    Improved data management to reduce maximum write latency
    Improved Performance of SSD as it fills up with data
    Improved Data Integrity
    Note: This requires a Low Level Format to the SSD which will erase any data on the drive.
    Please ensure that your data is backed up prior to performing the Firmware Update. We are hopeful that future Firmware revisions/updates will not be destructive.

    Link to download

    Beware that it is a destructive firmware update!

    Quite a few changes.

  12. #237
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
That looks like a promising f/w update...

    Anvil, have you seen any difference in RAM or CPU usage between the LE and the C300? (I.e., is there a significant difference in RAM or CPU usage when doing the same tasks with each SSD?)

  13. #238
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
I would speculate the main points of this firmware update, reading between the lines, can be summarized as:
    *Holding less data for writing in cache, with a more rapid flush cycle
    *GC acting on TRIM information faster
    *Streaming reads being read more efficiently (read-ahead to internal cache), and possibly putting NAND devices into a sleep or low-power state when the device is inactive or at constant low activity

    If the C300 indeed uses the clean-block-pool writing method (as I suspect), there is little or no need to hold user data in cache when there are clean blocks on idle flash channels; only a short queue is needed to distribute across all free channels (a few KB to a few hundred KB). This change would improve data integrity, increase robustness to unexpected power loss, and reduce maximum write latency. (A toy sketch of the idea follows below.)
    With GC acting more aggressively on TRIM, this would also help replenish the clean block pool faster and possibly eliminate the need for the controller to wait for an erase before it can write, reducing maximum write latency. It would also make it easier to keep the clean block pool from being depleted when working near full capacity.

    I don't know if any of this is the case, but it would make sense from what I understand of the architecture and the change log. If I'm wrong, please correct me, and if you dislike me sharing thoughts and speculations, please say so.
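    Here is the clean-block-pool write path as a toy model. This is pure speculation expressed as code; every name and number is made up for illustration and has nothing to do with the real firmware:

    Code:
    from collections import deque

    # Toy model: writes go straight to pre-erased blocks on idle channels;
    # TRIM-driven GC replenishes the pool in the background.
    CHANNELS = 8

    class ToySSD:
        def __init__(self, clean_per_channel=4):
            self.clean = {ch: deque(range(clean_per_channel)) for ch in range(CHANNELS)}
            self.next_ch = 0

        def write(self, data):
            # Round-robin across channels that still hold a clean block:
            # no erase on the write path, so worst-case latency stays low.
            for _ in range(CHANNELS):
                ch = self.next_ch
                self.next_ch = (self.next_ch + 1) % CHANNELS
                if self.clean[ch]:
                    return (ch, self.clean[ch].popleft())  # programmed immediately
            # Pool depleted: the controller must wait for an erase (latency spike).
            raise RuntimeError("clean block pool depleted")

        def gc_on_trim(self, ch, block):
            # Aggressive GC on TRIM: erase and return the block to the pool
            # right away, keeping it from running dry near full capacity.
            self.clean[ch].append(block)

    ssd = ToySSD()
    location = ssd.write(b"userdata")
    ssd.gc_on_trim(*location)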

  14. #239
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Hi Audiencofone

    No, not really.
I've been using the LEs for my VMware stuff and it's been a joy; the Crucial was in my laptop for about 3 weeks, so I haven't really had time to compare them side by side.

I guess I need to run a few iometer configs tonight; I'm a bit excited about the reduced write latency.

If all goes well I'll compare them using VMware this weekend. (Don't know how to compare them yet, but I'll try to figure something out.)

    edit:

    @GullLars,
    Your thoughts are always welcome

    edit2:
Upgrade went well, no problems; IDE mode is required, though.

    About to check the latency within a few minutes.
    Last edited by Anvil; 05-04-2010 at 08:39 AM.

  15. #240
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
First benchmark using FW0002, on the ICH10R.

    I can't see any write-latency improvement using AS SSD; the base score has improved a bit, though.

    as-ssd-bench C300-CTFDDAC256M 04.05.2010 19-12-35_ICH.png

    cdm3_ICH.PNG

Looks like the 4KB reads have improved, and the values are generally high. (Clean drive.)

    edit:
    iometer result 512B_256KB_ICH_C300_fw0002_4KB_aligned.zip

The 6GB runs are coming up in half an hour.
    Last edited by Anvil; 05-04-2010 at 10:02 AM.

  16. #241
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
Anvil, they said maximum write latency, not average. That means if you give me an old IOmeter run of random writes and a new one, the difference should show when comparing max access times.
    I don't know whether you tested random writes before flashing, so we may not find out.
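    If you do have both runs, something like this would pull out the max access time for comparison (a rough sketch; the filenames are hypothetical, and the column header is an assumption that may differ between IOmeter versions):

    Code:
    import csv

    def max_response_ms(path, column="Maximum Response Time"):
        # NOTE: the header text is an assumption; adjust it to whatever your
        # IOmeter version actually writes in its result CSV.
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        for i, row in enumerate(rows):
            if column in row:                 # header row of the results section
                # the row below the header holds the all-workers summary
                return float(rows[i + 1][row.index(column)])
        raise ValueError(f"column {column!r} not found in {path}")

    # Hypothetical filenames for the pre- and post-flash runs:
    old = max_response_ms("c300_fw0001_randwrite.csv")
    new = max_response_ms("c300_fw0002_randwrite.csv")
    print(f"max write latency: {old:.2f} ms -> {new:.2f} ms")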

  17. #242
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
OK GullLars,
    I misread the max-latency part.
    Here is the 6GB fw 0001 run: 512B_256KB_9128_C300_256GB_4KB_6G.zip

    I forgot to select the C300 on the last run, so I have to rerun the test.
    (Ended up with a Kingston result file.)

  18. #243
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Anvil View Post
First benchmark using FW0002, on the ICH10R.
    WOW.

  19. #244
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
I would wait on updating the f/w on any more C300s -
    check this out - http://www.anandtech.com/show/3694/c...0-firmware-fix
    and this at Crucial - http://forum.crucial.com/t5/Solid-St...ead/td-p/12363

    Very nice benches! 32-35MB/s 4K reads (AS SSD and CDM) - that's an improvement, isn't it?
    Last edited by SteveRo; 05-04-2010 at 12:13 PM.

  20. #245
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Steve,

I've posted in that thread; one needs to select Native SATA mode (Legacy) on the GB boards.
    The firmware shouldn't brick the drives, though.

    Both my drives are working just fine.

    @GullLars
iometer random-write runs (C300 single drive, old and new firmware):
    4KB_64KB_9128_C300_256GB_6G_RWrite.zip

    edit:
    Steve,
Yes, it looks like a small increase in 4KB random reads.

  21. #246
    Xtreme Enthusiast
    Join Date
    Feb 2010
    Posts
    701
I may wait until there's an update by Crucial to that thread. Anxious to see if the PCMV HDD suite likes the new f/w.
    slowpoke:
    mm ascension
    gigabyte x58a-ud7
    980x@4.4ghz (29x152) 1.392 vcore 24/7
    corsair dominator gt 6gb 1824mhz 7-7-7-19
    2xEVGA GTX TITAN
    os: Crucial C300 256GB 3R0 on Intel ICH10R
    storage: samsung 2tb f3
    cooling:
    loop1: mcp350>pa120.4>ek supreme hf
    loop2: mcp355>2xpa120.3>>ek nb/sb
    22x scythe s-flex "F"

  22. #247
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    mbreslin,

Keep us updated on your score.
    It does look better in some areas.

    --

Here are the same benchmarks for the Marvell 9128 controller:

    as-ssd-bench C300-CTFDDAC256M 04.05.2010 22-23-22.png

    cdm3_Marvell.PNG

The AS SSD score is actually better using the ICH.
    (Nothing to do with the drive; it's the controller.)

    @GullLars
    512B_256KB_Marvell9128_C300_fw0002_4KB_aligned.zip

  23. #248
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
And OC'ed to 4.3 / PCIe 119?

    edit - also anxious to see the R0s

  24. #249
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
I tried 4.3 / PCIe 119 and there are small improvements on both read and write. (4K)

    Keep in mind that the first CDM run was on a clean drive. (It's still clean, but not superclean.)

    CDM3_ICH_43_pcie119.PNG

    I'll try to get it stable at 4.4 tomorrow and try all the drives.
    (Had to reinstall due to the lockup yesterday; W7 started, but damage was done to the registry.)

  25. #250
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
Anvil, in all my freezes and abends while trying to push the envelope with LinX stability testing and PCMV runs in the past year, I think I had to reinstall maybe once. If you image the drive before you start OC'ing, that should make for an easy fix the next time (which should be a long time from now).

    edit - was the above with WBC on?
    Last edited by SteveRo; 05-04-2010 at 02:02 PM.

