
Thread: Vertex LE vs Crucial C300

  1. #101
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    onex,

    1) I know, but testing at QD 1-8 just isn't enough; I use my computers as servers, so QD 16 or higher is not unusual.
    2) 4KB reads @ QD1 are generally slower than writes.
    3) Like GullLars says, the screenshot from Legit should never have been shown. I'll dig up a more representative one, just give me a few minutes.
    4) See GullLars' reply.
    5) Yup, that's the C300; access times are identical between the ICH10R and the 9260 when using WT.

    edit:
    Meanwhile,

    Enjoy this AS SSD Benchmark,
    It's got nothing to do with SSDs, yet, but we might get there
    as-ssd-bench 07.04.2010 23-53-26.png
    Last edited by Anvil; 04-26-2010 at 09:36 AM.

  2. #102
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Guill - what controller?

  3. #103
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    If you are referring to my screenshots on the previous page, it's Anvil's 9260. I'll edit the post to say so explicitly; the names of the pictures already do. I don't think it's the newest firmware, so performance may be better with the newest firmware.
    There is no I in GULL LARS :P

  4. #104
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    duplicate post...
    Last edited by GullLars; 04-26-2010 at 09:55 AM.

  5. #105
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Here's a more representative AS SSD Benchmark for the G2 160GB

    as-ssd-bench INTEL SSDSA2M160 18.11.2009 10-54-48.png

    I'll soon update my charts to include both the X25-E and X25-M G2. (single)

    As for the controller, it's the 9260-8i.
    I'm not sure, but I think some of the benchmarks were done using the latest firmware.

  6. #106
    Banned
    Join Date
    May 2009
    Posts
    676
    GRAPHS:

    So, judging by the 2300MB/s at 64KB for the 8x G2s, these must all be read tests.

    And wow, that's an enormous potential gain above 64KB.
    The results scale roughly linearly up to the 16KB and 32KB tests, with the G1s keeping themselves high as before (from 512B to 16KB) while the G2s start to fall behind, especially at 32KB and above.
    There are also very strange results above 32KB in the 64KB WB vs WT comparison; it seems that at high QD and larger block sizes its potential is being revealed.
    It would be interesting to see a 1MB test with these.

    In general, the graphs show about the same scaling up to ~16KB between 4 and 8 drives at QD 16, meaning no significant change even under heavy use, and even at 32KB.
    At 64KB the 8-drive RAID array starts to show its scaling, even though the 4x G1s still perform magnificently and the G2s are not far behind them.
    Even the V's, for their price and performance, are more than bearable drives.

  7. #107
    Banned
    Join Date
    May 2009
    Posts
    676
    Oh, that's a nice access time!

    And lol about the earlier one.

    Hopefully Crucial would/could fix this with a firmware upgrade.

  8. #108
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    You can't fix the difference between IDE and AHCI mode with a firmware upgrade. The difference comes mainly from Native Command Queuing (NCQ), which is present in AHCI mode and not in IDE mode.
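    A minimal sketch of why NCQ matters, assuming the standard queueing relationship (throughput roughly equals outstanding commands divided by latency); the 0.1ms access time is an illustrative figure, not a measurement from this thread:

        # Rough upper bound on random-read IOPS for a given queue depth (Little's law).
        # With IDE (no NCQ) only one command is outstanding; AHCI + NCQ allows up to 32.
        def estimated_iops(queue_depth: int, latency_s: float) -> float:
            return queue_depth / latency_s

        latency = 0.0001  # assumed 0.1 ms access time, for illustration only
        print(estimated_iops(1, latency))   # IDE-like, QD1:  ~10,000 IOPS ceiling
        print(estimated_iops(32, latency))  # AHCI + NCQ, QD32: ~320,000 IOPS ceiling (drive-limited in practice)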

  9. #109
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Quote Originally Posted by GullLars View Post
    If you are referring to my screenshots on the previous page, it's Anvil's 9260. I'll edit the post to say so explicitly; the names of the pictures already do. I don't think it's the newest firmware, so performance may be better with the newest firmware.
    There is no I in GULL LARS :P
    Sorry I butchered your name!

  10. #110
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    GullLars,

    He's talking about the access times on the C300, which are a bit on the high side.

  11. #111
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Yes, I've noticed the slightly high random write access time. Since the drive has pretty high random write IOPS, I'll speculate it's because of write attenuation in cache: the controller holds the data in cache for a little while (a few hundred ms) and writes it as a bigger sequential block while remapping the LBA->physical table at the same time. This could also be the reason the 128GB version has lower random write IOPS than the 256GB, since the 256GB has higher 4KB random write throughput than the 128GB has sequential write.
    140MB/s at 4KB = 35K IOPS.
    I've noticed the same tendency with SandForce numbers, but it seems SandForce holds data in cache for a shorter time and gets less latency overhead on its write attenuation.
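    Just to make that conversion explicit, a quick Python sketch (nothing drive-specific, just the arithmetic):

        # Converting throughput to IOPS for a given block size.
        # 140 MB/s of 4KB random writes is roughly 35K IOPS.
        def iops(throughput_mb_s: float, block_kb: float) -> float:
            return throughput_mb_s * 1000 / block_kb  # using 1 MB = 1000 KB for round numbers

        print(iops(140, 4))  # -> 35000.0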

    Steve, you're not the first one. I made a pun with "no I in GullLars", like "no I in team". It's a bit of dry humor, so I understand if you didn't get it.

  12. #112
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    You're onto something there GullLars, but:

    The SandForce doesn't have a cache; it does use compression though, and that might be the reason for its somewhat high access times (compared to e.g. the Intels).

  13. #113
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    I hate to get into semantics, but I didn't specify the size of said cache, or whether it was internal (L1, L2, L3) or external (RAM). In SandForce's case, there is no external cache, but a few MB of internal cache to facilitate compression, write attenuation, and a little buffering. I haven't heard any specific numbers, but I've heard speculation the size is around 4-8MB; based on what, I don't know.
    Holding data in such a cache for 50-200µs for compression and write attenuation would not pose a big security risk, and would add little overhead for random writes of block sizes larger than 4KB. The added throughput from the attenuation and compression makes up for the added latency in most cases. PCMark Vantage clearly likes it.
    For the scenario where most data could be held in cache the longest, 16-64KB random writes (I don't know exactly where yet) of highly compressible data allowing 250-270MB/s write speeds while filling an 8MB cache, flooding the cache would take 30-32ms, making that about the max time data could stay there. If the cache is 4MB, then 15-16ms. If you also set aside a bit of it for making the compressed copies, even less. If some of the cache is reserved for read decompression, LBA tables(?) etc, still less.
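    To spell out that back-of-the-envelope number, a rough Python sketch using the figures above (which are themselves speculation):

        # How long data could sit in a write cache before it floods,
        # given an assumed cache size and incoming write throughput.
        def max_residency_ms(cache_mb: float, write_mb_s: float) -> float:
            return cache_mb / write_mb_s * 1000

        print(max_residency_ms(8, 260))  # ~31 ms for an assumed 8MB cache at ~260MB/s
        print(max_residency_ms(4, 260))  # ~15 ms for an assumed 4MB cache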

    BTW, Anvil, would you mind running the random read 0.5-64KB QD1-128 (v4) IOmeter config on your LE and C300? I'm thinking of using the data both for a more comprehensive look into the architecture and for nice comparison graphs to post later.

  14. #114
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    GullLars,

    This is what I've got, QD1-64.
    512B_256KB_9128_C300_256GB_4KB_6G.zip

    512B_256KB_ICH_LE_100GB_4KB_CLEANED.zip


    I edited out the QD128; it's gone. QD64 is more than enough imho.
    I might be able to run the original script sometime later.

  15. #115
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    No problem. I originally added it to better show the scaling of RAID setups; for a single drive it won't matter much.

  16. #116
    Banned
    Join Date
    May 2009
    Posts
    676
    Quote Originally Posted by GullLars View Post
    the controller holds the data in cache for a little while (a few hundred ms) and writes it as a bigger sequential block while remapping the LBA->physical table at the same time
    The only issue is that 5000 IOPS * 0.7 sec * 4KB = 14MB, which is much bigger than the supposed cache; it can't hold it for that long.
    And on reads it simply takes the data out of the flash, and even there, 0.145ms is about 2.5 times higher than Intel's G2 at 0.059ms (Anvil's picture above).
    Maybe that is some decompression penalty, I really don't know.
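    Spelling that check out, a small Python sketch using the numbers above (the 0.7 s hold time is just taken from the "few hundred ms" speculation):

        # Data that would pile up if 4KB random writes at 5000 IOPS were held
        # in cache for 0.7 s -- far more than a supposed 4-8MB cache could hold.
        iops, hold_s, block_kb = 5000, 0.7, 4
        accumulated_mb = iops * hold_s * block_kb / 1000
        print(accumulated_mb)  # -> 14.0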

    Yet it doesn't lose any speed, apparently (at least compared to other SSDs, not to itself).
    Maybe it could be tested? A random >8MB read/write test?
    Maybe you can even test the cache size this way; I would run a few tests just for the experiment.
    Though maybe I'm confusing something here.

  17. #117
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    onex,

    It looks like you're talking about the C300? (which doesn't have compression)
    The C300 has a 128MB external cache, quite large compared to most SSDs.

    Both the C300 and the SandForce controllers are on the high side with regard to access time vs the Intel SSDs, yet they perform better for the most part.

    The Intel M series has an external 16MB-32MB cache (16MB for the G1 and 32MB for the G2), but it's not used for "user" data; it's supposedly used for housekeeping etc.

  18. #118
    Back from the Dead
    Join Date
    Oct 2007
    Location
    Stuttgart, Germany
    Posts
    6,602
    Hey guys,

    I finally got my 2x Vertex LE 50GB, so here are some benches for comparison - seems the 50GB ones don't lose too much to their bigger brothers.
    HW: X5680 (hexacore) @ 4.5GHz / 6GB DDR3-1800 C9 / GB EX58-Extreme BIOS F12 / 2x Vertex LE 50GB FW 1.05
    Setup: 16KB stripe RAID 0 @ ICH10R, write-back cache on to the left, WB off to the right.





    Gonna throw on the OS now... or would you recommend going with 128k stripe?

    cheers
    World Community Grid - come join a great team and help us fight for a better tomorrow!


  19. #119
    Xtreme Enthusiast
    Join Date
    Aug 2007
    Posts
    685
    Nice, what sort of pricing did you get them for?
    Tempted to get one (& maybe a 2nd later) instead of an X25-M 80GB, or one of the other SandForce offerings.

    Thank you.

  20. #120
    Registered User
    Join Date
    Dec 2002
    Posts
    29
    Hey everyone, I'm currently building a 4-processor, 48-core Magny-Cours machine for some heavy CFD simulations and was wondering what you guys would recommend for high-speed storage. The simulation files are on the order of 5-10GB apiece and are written and read quite often. I zipped one of the files up and can get 1:2 compression out of it. I already have 4TB of HDD storage, but need some fast storage for the OS and for the simulation to write to. I was thinking of getting 2x R0 100GB Vertex LEs, but the 128GB C300s are looking really good also. Which SSDs are going to give me the best read/write performance for large, slightly (1:2) compressible files?

  21. #121
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    The C300s are a bit more $ vs. the LE, but initial indications - based on work Anvil has done - make me think the C300 may be the fastest SSD available at this time.
    The price is steep, but it is 256GB capacity.

  22. #122
    Banned
    Join Date
    May 2009
    Posts
    676
    I'd take the 100GB LEs; their way of working with compression and the technological advancement of SandForce really do seem to be quality engineering.
    I've yet to get the full numbers on the C300 for comparison, but from a price standpoint, if you're bound by capacity, you can get another two for the same price.
    You'll give up about 100GB of space going from 2 C300s to 4 LEs, yet you will get much better speed.
    Add a $250 LSI 9210/9211 HBA to the setup, and you will literally FLY.
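    For Natedog's 1:2-compressible files, a very rough Python sketch of what that ratio means in practice (the array throughput is an assumed ballpark figure, not a measurement from this thread):

        # Rough sketch: time to write one simulation file and how much data
        # actually hits the flash when the controller compresses it.
        file_gb = 10.0
        array_write_mb_s = 2 * 250     # assumed: 2x SandForce drives at ~250MB/s each on compressible data
        compression_ratio = 2.0        # the 1:2 ratio Natedog measured by zipping a file

        write_time_s = file_gb * 1000 / array_write_mb_s
        flash_gb_written = file_gb / compression_ratio  # a compressing controller stores roughly half
        print(write_time_s, flash_gb_written)  # -> ~20 s per file, ~5GB actually programmed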

  23. #123
    Registered User
    Join Date
    Dec 2002
    Posts
    29
    What about the 128GB version of the C300? I believe those cost about the same as the LEs. How would 2R0 C300 128GBs compare to 2R0 100GB LEs at large file reads and writes? (both on a 3Gb/s onboard controller)

  24. #124
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Quote Originally Posted by Natedog View Post
    What about the 128GB version of the C300? I believe those cost about the same as the LEs. How would 2R0 C300 128GBs compare to 2R0 100GB LEs at large file reads and writes? (both on a 3Gb/s onboard controller)
    That's a good question for Mr Anvil; he has both the LE and the C300 256 - I would guess they might be pretty evenly matched, at least on paper.

  25. #125
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'll soon find out about 2R0 LE vs 2R0 C300.

    I could tell you that 2R0 LE are the fastest drives I've ever tried but that would be unfair as I haven't tried the C300 array yet.

    I've got a feeling that it could be a close race, at least using 3Gb/s interface.

    I'll share my benchmarks and initial findings by this time tomorrow.

    @jcool

    Looks like the 50GB SF drives are performing quite nicely.

    Personally I prefer small stripes; initially I used my LEs on a 128KB stripe, but they are now at a 16KB stripe.
    I'll be doing some tests this weekend using 4-8KB stripes on both the LEs and C300s.
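    On stripe size, a small Python sketch of how many drives a single I/O touches in RAID 0 for a given stripe size (alignment ignored, purely illustrative):

        import math

        # In RAID 0, an I/O larger than the stripe size is split across drives.
        def drives_touched(io_kb: int, stripe_kb: int, n_drives: int) -> int:
            return min(n_drives, math.ceil(io_kb / stripe_kb))

        print(drives_touched(64, 16, 2))   # 64KB I/O on a 16KB stripe: hits both drives
        print(drives_touched(64, 128, 2))  # 64KB I/O on a 128KB stripe: stays on one drive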
    Last edited by Anvil; 04-28-2010 at 01:45 PM.
