
Thread: Vertex LE vs Crucial C300

  1. #76
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Quote Originally Posted by onex View Post
    Gullars,
the only thing that comes between what you say and reality is that, as an idea, this all seems very nice on paper (though it's hard to see how a dual SF-1200 SSD with a ROC added to it, a redesigned PCB, enclosure etc. will only cost 1.5 times a single SF-based SSD), yet usually it's harder to implement and doesn't always scale linearly..
though the idea is indeed interesting.

another thing that comes up, reading your post, is that the compression is (as far as I understand) what gives you the 2x speed, yet actually you don't get the drive to perform at 270MB/s (saturating the protocol)..
the drive itself is manipulating data faster than a normal non-compressing drive, though the bandwidth supplied by the SF controller is not 400MB/s full duplex, but rather ~200MB/s to the CPU and ~400MB/s (by your calculation) to the NAND flash chips.

I'm yet to have the full view/understanding of that drive, so I might be missing something here,
yet from a brief overview, that's how it seems.

the OWC 50GB SF-1500 based drive costs ~$230; take off the 1500 and place a 1200 on it, take $40-50 off the top,
and you've got a ~$190 drive. Double that and add the $100 ROC you mentioned, and you end up with ~$480 (+$50 for any overhead),
so ~$530 for a 100GB drive.. let's say $550.

that's not too bad if it works as you say (double the ability of an LE 100GB at a reasonable price premium).
To the performance issue: the SSD can handle the RAW speed internally to and from the flash chips. Externally it can saturate the SATA interface if the compression is higher than {interface speed}/{raw speed}, which is about 1.4x compression for reads, and 2-2.25x for writes. Meaning anything compressed by 1.4x or more will saturate the interface when read, and anything compressible by 2.25x or more will saturate the interface when writing. For example, Windows 7 and the MS Office 2007 suite are compressible by more than 2x.
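If it helps, here's a minimal sketch (my own illustration, not vendor code) of that saturation rule, using the figures quoted in this thread: raw NAND speed of roughly 200MB/s read / 120MB/s write, and SATA 3Gbps ceilings of about 280MB/s read / 270MB/s write.
Code:
# Host-visible throughput of a compressing controller: raw NAND speed scaled
# by the compression ratio, capped by the interface ceiling.
RAW_READ, RAW_WRITE = 200.0, 120.0        # MB/s, raw speed to/from the NAND
IFACE_READ, IFACE_WRITE = 280.0, 270.0    # MB/s, SATA 3Gbps ceilings

def effective(raw, iface, compression):
    """External speed = min(raw * compression, interface)."""
    return min(raw * compression, iface)

print(IFACE_READ / RAW_READ)                    # ~1.4x compression saturates reads
print(IFACE_WRITE / RAW_WRITE)                  # ~2.25x compression saturates writes
print(effective(RAW_READ, IFACE_READ, 2.0))     # 280.0 -> reads saturated at 2x
print(effective(RAW_WRITE, IFACE_WRITE, 2.0))   # 240.0 -> writes not yet saturated at 2x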


When it comes to the price of an internal 2R0 (2x RAID-0) SF-1200 drive, you could make it as a 3.5" drive, with 2x 2.5" PCBs plus one PCB for the ROC, with 1x SATA/SAS 6Gbps connector externally and 2x SATA 3Gbps internal connectors to the 2.5" PCBs. So you pay for 2x 50GB SF-1200 plus $100 for the 2-port ROC (only able to handle RAID-0, not using cache) and the extra PCB it sits on. I'll take your word for the OWC 50GB SF-1500 based drive costing ~$230. An SF-1200 placed on a PCB without the supercap slot could also reduce the cost, but probably not by $50; let's say $25, down to $200 per SF-1200 on the 2.5" PCB without the 2.5" enclosure, cables and other stuff.
2x $200 + $100 = ca. $500 for 100GB. $500/$380 (Agility 2 100GB) = 1.3.
$380 * 1.5 = $570.


    *drifting off in idealistic dreams*
It would be nice if SandForce decided to make a PCIe SSD with the same sort of design for prosumers, but made full use of the NAND for the RAW speed and didn't limit it to 200-220/120-130 on the larger capacities. By bumping the processing power in the controller to levels similar to higher-end RAID cards for the higher capacities, you could get several GB/s from a 200GB card for compressible data. Postulating linear scaling of raw performance from the 50GB model, raw performance could be 800MB/s read (if the 200MB/s raw read is indeed limited by the physical max read of the NAND for the 50GB..) and 500MB/s write, and would then be multiplied by compression when possible, up to saturation of the PCIe interface. By not artificially write-limiting small-block random, IOPS scaling could also be linear from the SATA drives using SandForce's architecture, making it capable of 4KB random writes equal to the raw write rate of 500MB/s = 125K IOPS.
SandForce's architecture is actually more suitable for SSD RAID controllers than for single-drive SSD controllers.
It would also be nice if SandForce made a RAID controller dedicated to flash SSDs (or NAND flash DIMMs?), taking compression one level higher and sequentially streaming random writes to the HBA. In that case, with a powerful processor, you could be bandwidth-limited by the PCIe interface for well compressible/compressed data, without the SSDs ever knowing the difference, working at their own pace with only (or mainly) sequential writes and both sequential and random reads.
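For what it's worth, a back-of-the-envelope check of the scaling arithmetic above (the 4x factor and per-capacity raw speeds are the post's own postulates, not SandForce specs):
Code:
# Linear scaling of the hypothetical 50GB raw speeds to a 200GB PCIe card.
RAW_READ_50GB, RAW_WRITE_50GB = 200, 125   # MB/s, assumed raw NAND limits of the 50GB model
SCALE = 4                                  # 200GB = 4x the NAND of the 50GB model

raw_read_200gb = RAW_READ_50GB * SCALE     # 800 MB/s
raw_write_200gb = RAW_WRITE_50GB * SCALE   # 500 MB/s

# 4KB random-write IOPS if small-block writes run at the full raw write rate
iops_4k = raw_write_200gb * 1000 // 4      # 500 MB/s = 500,000 KB/s, divided by 4KB
print(raw_read_200gb, raw_write_200gb, iops_4k)   # 800 500 125000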

    sigh
    so many floating thoughts, so little hope of it becoming reality.
    Last edited by GullLars; 04-22-2010 at 03:56 PM.

  2. #77
    Banned
    Join Date
    May 2009
    Posts
    676
XP doesn't natively support 4KB sectors; all you get by aligning the drive is that the partition is aligned to the drive's internal "structure".
XP still won't "think" in 4KB sectors, and that's the main difference from Vista, W7 and any other OS with native 4KB support.
OK, that's a good thing.

Now, GullLars,
the only thing I'm missing here is that the controller is capable of delivering ~220MB/s read to the CPU..
What you are saying is that if the compression is at 1:2, this brings the drive to actually operate at 440MB/s read...
and that's why they were saying Windows 7 startup took ~1/2 the time to finish..

Now, if I understand correctly, the data is decompressed in the controller when it arrives from the NAND chips;
the data is compressed, and because it's half its size, the operating speed can be conceived of as double..

Now the data is being pulled out of the NAND chips, going to the controller, and being decompressed there;
the data, at its full size again, goes from the controller to the CPU over the SATA cable at 220MB/s..

So the only difference we've got here is at the controller-NAND level.
Now,
if the data remains half its size due to the compression,
then we actually have double the SSD capacity (for data compressible to 1/2),
so instead of having a 100GB LE drive, we actually have something in between 150GB and 200GB.
So,
I don't know if we can take it all and halve it when the compression is 1/2 (or 2x, or however we call it),
because if we take the entire protocol, meaning from the controller to the CPU, it is acting the same as any other SSD (though it might be a bit faster than some).
The difference is at the NAND-controller level, where I'm lost.

  3. #78
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Quote Originally Posted by onex View Post
Now, GullLars,
the only thing I'm missing here is that {#1}the controller is capable of delivering ~220MB/s read to the CPU..
What you are saying is that {#2}if the compression is at 1:2, this brings the drive to actually operate at 440MB/s read...
and {#3}that's why they were saying Windows 7 startup took ~1/2 the time to finish..

Now, if I understand correctly, the data is decompressed in the controller when it arrives from the NAND chips;
the data is compressed, and because it's half its size, the operating speed can be conceived of as double..

Now the data is being pulled out of the NAND chips, going to the controller, and {#4}being decompressed there;
the data, at its full size again, goes from the controller to the CPU over the SATA cable at 220MB/s..

So the only difference we've got here is at the controller-NAND level.
Now,
{#5}if the data remains half its size due to the compression,
then we actually have double the SSD capacity (for data compressible to 1/2),
so instead of having a 100GB LE drive, we actually have something in between 150GB and 200GB.

    <...>
The difference is at the NAND-controller level, where I'm lost.
    I'll respond point by point to clarify.

#1. The controller is capable of reading 200-220MB/s and writing 120-130MB/s PHYSICAL SPEED FROM THE NAND; this is referred to as RAW SPEED and is independent of compression.
If the data is compressed, it will take up less physical NAND space and be written/read in a shorter time (compressed size divided by raw speed), but still at the same physical speed.
The controller is capable of delivering 280MB/s read and 270MB/s write TO THE CPU over the SATA interface, saturating it, IF the data is compressible by 1.4x for reads and 2-2.25x for writes. This means data compressible by more than this will leave the controller bottlenecked by the SATA 3Gbps interface. Keep in mind, this is sequential speed, not small-block random, which could be bottlenecked by the IOPS performance of the controller.

    #2. If the compression of a certain amount of data is 2:1 (2x), then the drive will read 400-440MB/s worth of data from the NAND (2x the raw speed), but only be able to transfer 280MB/s out of the drive.

#3. They're not saying it took half the time; they say they logged the number of LBAs (logical blocks) written versus the number of flash pages written, and found fewer than half the number of flash pages were written, meaning the data had been compressed on average by more than 1:2.

#4. After being decompressed, the data can be transferred at 200-280MB/s to the CPU, depending on how much it's compressed: 200 being not compressed at all, and 280 being 1.4x compressed or more.

#5. The 100GB drives have 128GB of NAND on them (16 chips x 8GB). 28GB is reserved for RAISE (Redundant Array of Independent Silicon Elements, i.e. some internal redundancy) and wear leveling. I do not know how much goes to internal redundancy, but it can be from 1/16 of the drive up to 1/5, given the manufacturer-reserved spare area. This means, depending on implementation, it could be made to allow 1-3 NAND chips to fail without losing data, and/or be implemented as parity for orders-of-magnitude better error correction.
On top of the reserved spare area not used for redundancy, which would be used for wear leveling, you also get dynamic spare area from 2 sources (see the sketch below):
*TRIMed unused LBAs, meaning if TRIM is active, any space the OS sees that has no files in it.
*The area saved by compression. Meaning if the drive is 100% full, but has 1:2 average compression, you actually have 50% of the space as dynamic spare area usable for wear leveling. If the average compression were 1.1x, you would have roughly 9-10% of the space as dynamic spare area.
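A minimal sketch of that last point (my own illustration, not vendor math), showing the physical space freed by compression when the drive is logically full:
Code:
# Fraction of user capacity freed up physically by an average compression ratio.
def dynamic_spare_fraction(avg_compression_ratio):
    """E.g. 2.0 (2:1) -> 0.50, 1.4 -> ~0.29, 1.1 -> ~0.09."""
    return 1.0 - 1.0 / avg_compression_ratio

for ratio in (1.0, 1.1, 1.4, 2.0):
    print(f"{ratio:.1f}x average compression -> {dynamic_spare_fraction(ratio):.0%} dynamic spare area")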


My argument for SandForce to make a SATA/SAS 6Gbps version of its drive is that you would avoid the 280/270MB/s R/W bottlenecks imposed by the SATA 3Gbps interface. It may also require a bump in processing power to allow the controller to compress/decompress data faster, so it wouldn't become the new bottleneck. Given the same RAW performance of the drive, a 6Gbps version would need 2.7-3x compression to saturate the interface for reads, and 4.6-5x compression to saturate the interface for writes.
Even if this level of compression is unrealistic for the entire capacity on average, some workloads would get a huge bump in speed, and incompressible or barely compressible data would behave just the same.
Since common system workloads would likely be compressible by more than 1.4x in a lot of cases, the average speedup for reads could be quite noticeable.
To beat the C300 on average read performance, it would require 1.6-1.75x compression.
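A quick check of those 6Gbps figures, assuming a usable SATA/SAS 6Gbps ceiling of ~600MB/s (my assumption, not a spec number) and the raw speeds quoted earlier:
Code:
# Compression ratio needed to saturate a ~600MB/s 6Gbps link = ceiling / raw speed.
RAW_READ = (200, 220)    # MB/s, raw read range
RAW_WRITE = (120, 130)   # MB/s, raw write range
IFACE_6G = 600           # MB/s, assumed usable SATA/SAS 6Gbps ceiling

print(IFACE_6G / max(RAW_READ), IFACE_6G / min(RAW_READ))     # ~2.7x to 3.0x for reads
print(IFACE_6G / max(RAW_WRITE), IFACE_6G / min(RAW_WRITE))   # ~4.6x to 5.0x for writes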


EDIT: For semantics' sake, I will clarify that I made a mistake earlier: it should be 2:1 for compressed to half the size, not 1:2. The notation goes {raw size}:{compressed size}. Compressed to half the size is also 2x compression. 2:1 = 2x, 3:1 = 3x, etc.
    Last edited by GullLars; 04-23-2010 at 07:30 AM.

  4. #79
    Banned
    Join Date
    May 2009
    Posts
    676
    #2. If the compression of a certain amount of data is 2:1 (2x), then the drive will read 400-440MB/s worth of data from the NAND (2x the raw speed), but only be able to transfer 280MB/s out of the drive.
OK, 1:
the fact is that the drive is performing at 220MB/s READ, i.e. the compression scheme could be working at 440MB/s or whatever, yet the controller is delivering data to the motherboard at 220MB/s.
Which means:
1 - the controller is limited to 220MB/s at the stock ICH/SB voltage.

RAISE seems to be sort of what you have said:
the spare memory chips enabled by the compression allow the drive to operate like a RAID 5 configuration,
enabling much better error correction, both at the cell level and the controller level.
The other side effect SF gets from using compression is writing less to the flash, i.e. allowing the usage of lower-quality memory chips to get a better price for the controller (per AnandTech).

The only thing left to ask now is
whether the company really WANTS to give more power to its controller, assuming it won't harm its lifetime or the memory cells..
I'm 100% positive they are aware their controller might reach higher speeds, yet I just think they don't really want to do it ....

The compression argument could be taken both ways:
the controller itself is sending data at 220MB/s,
so the compression ratio is irrelevant here.
Even if the compression ratio were 15x, it would still take the controller precious time to decompress it,
and it is not even certain that SF can implement a better-than-2x compression ratio on the 1200/1500..
What you are actually saying is:
if the compression is 4x, then the controller has 1/4 of the data to write,
so it can do it much faster..

If we take 220MB of information, which takes about 1 sec to go from the NAND chips to the CPU,
then that 220MB/s already has 2 factors calculated into it:
1: the controller's ability to pass data to the CPU,
2: the controller's ability to fetch data from the NAND and decompress it.

Now, in order to better figure out what you are proposing, we need to figure out more parameters:
1: whether a 4x compression is at all possible with the current controller, or rather, whether there would have to be a new design.. (if SF engineers even know a way to compress data beyond 2x).
2: what the latency hit is..
3: the controller would have to be much faster to unravel data compressed at a 4x ratio, plus deliver this data to the CPU faster than 220MB/s.

What I'm basically saying is that SF is probably working on a new generation of controllers.
2nd, if they shipped the 1200 working at 220MB/s and not 250MB/s, it's probably not because they didn't want to..
it's because currently they couldn't have supplied such bandwidth.

I'm still missing you a bit on that 280MB/s (where did you get it from )
Hope this clears out a bit of the confusion or unclear thoughts.
    Last edited by onex; 04-23-2010 at 08:55 AM.

  5. #80
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    onex,
wrt the 280MB/s:

This is what the LE is capable of:
1XLE_0_FILL.JPG

Pretty close to 280MB/s.

  6. #81
    Banned
    Join Date
    May 2009
    Posts
    676
    E:
    oh,
I've just seen the first page of the thread,
so what's the 120MB/s talk all about?

Anyhow,
the last post still stands regarding the ability to achieve 4x compression; that might be unattainable.

Well,
indeed, it would be interesting to check this PCB out on the SATA3 protocol.
I'm wondering whether they haven't made any tests, or whether they are preparing the 'next generation LE'.
    Last edited by onex; 04-23-2010 at 10:30 AM.

  7. #82
    Banned
    Join Date
    May 2009
    Posts
    676
    The controller is capable of reading 200-220MB/s and writing 120-130MB/s PHYSICAL SPEED FROM THE NAND
P.S. - where did you take that from..?

Anvil's results showed a top of 250/260 R/W.

  8. #83
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
I'm starting to suspect you are trolling, onex... I've now said 3 times:
THE DRIVE IS CAPABLE OF SATURATING THE SATA INTERFACE when data is compressible enough (the speeds Anvil just posted above).
Since the controller uses compression, the external and internal speeds will be different for compressible data. The INTERNAL speed is ca. 200-220MB/s read, 120-130MB/s write. This is shown when working on data that is not compressible, like AS SSD on one of the earlier pages, or CrystalDiskMark 3.0 with the default data pattern (not 0x00).
For data that is compressed, the controller gets less data to write to NAND and read back again, but still reads and writes at 200-220MB/s read and 120-130MB/s write internally, to and from the NAND chips.

Let's say you have a 1000MB file that is compressible to 200MB. When this is sent from the system to the SSD, 1000MB is transferred over the SATA interface at 270MB/s, enters the SSD controller, is compressed to 1/5 of its original size, and is written to NAND flash at 270/5 = 54MB/s. When the same file is requested back by the system, data is read from the NAND at 56MB/s, decompressed by 5x in the controller, and transferred over the SATA interface at 280MB/s. The first few blocks are probably read from NAND at 200MB/s and decompressed until the cache is full, and then transferred at 280MB/s while the rest of the blocks are read at 56MB/s as space in the cache becomes available.
For such a file as mentioned above, it will take 3.7 seconds to write it, and 3.6 seconds to read it. If the SandForce drive had used a 6Gbps interface instead, it would take 1.7-1.8 seconds to write it, and 1.7-1.8 sec to read it back.
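A minimal worked check of those timings (the ~560MB/s usable 6Gbps ceiling is my assumption; the 3Gbps figures are the ones quoted above):
Code:
# Transfer time is set by the slower of the interface (moving logical data)
# and the NAND (moving the physical, compressed data).
FILE_MB = 1000     # logical file size
RATIO = 5.0        # 5:1 compressible -> only 200MB touches the NAND

def transfer_time(file_mb, iface_mb_s, raw_mb_s, ratio):
    return max(file_mb / iface_mb_s, (file_mb / ratio) / raw_mb_s)

print(transfer_time(FILE_MB, 270, 125, RATIO))   # write, SATA 3Gbps -> ~3.7 s
print(transfer_time(FILE_MB, 280, 210, RATIO))   # read,  SATA 3Gbps -> ~3.6 s
print(transfer_time(FILE_MB, 560, 125, RATIO))   # write, SATA 6Gbps -> ~1.8 s
print(transfer_time(FILE_MB, 560, 210, RATIO))   # read,  SATA 6Gbps -> ~1.8 s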

I will also point out that the data compression ratio depends mostly on the files, and to a lesser extent on the compression method. Just like with WinZip or WinRAR, etc., the SandForce controllers can compress data much more than 2-5x if the data is easily compressible; there is no artificial limitation at work there.

  9. #84
    Banned
    Join Date
    May 2009
    Posts
    676
OK, now I get it!!
I see, I see,
that's what you were talking about: benchmarking software that compresses/doesn't compress data!
CDM & AS SSD, and the different compression from the TeraCopy test!
I wasn't getting it before..
You guys are talking about things that are obvious to you, as you've handled them for years, but for others who just heard about SSDs a few months ago, and have used computers for a relatively short time, it isn't so.

I'll try to get further into your post later on.
    Last edited by onex; 04-24-2010 at 05:30 AM.

  10. #85
    Banned
    Join Date
    May 2009
    Posts
    676
So 264MB/s seems to be around the max interface speed for the setup you have the LEs connected to now, since 0x00 should be extremely compressible. If the drives had come with SATA 6Gbps, they would probably also max that interface when writing 0x00.
Can anyone here with more programming and systems experience than me tell me if it's common to have some percentage of a drive or data written as all zeroes or all ones? (as in pages/clusters, or larger blocks)
If so, making the controller recognize pages of all zeroes or all ones and not write them at all, but simply list them in the LBA table with a bit (or a few bits) of metadata indicating this, would allow for freeing up space, reducing overhead, and always saturating the interface for such chunks of data.
P.S. -
how would that be possible?
SSDs, AFAIK, write 0 to a cell when it is programmed, and 1 is the "erased state".
Now take any simple data pattern such as a single letter, A, which is 0x41 in ASCII, or 01000001 in binary;
if you take a txt file filled with letters, where every letter stands for a single byte of data, you get a mix of arbitrary-looking 1s and 0s.
You're talking about a form of compression, or a sort of LBA filtering, which could be possible in general, yet I doubt it happens commonly.
It is very hard to predict data at the block level (512KB), as there are doubtfully 2 identical blocks on an SSD or an HDD that aren't taken from duplicate files.
It would be easy that way..
I'm thinking what they are doing is taking the measured stream of data and applying a hardware algorithm to it; they could hash it (yet hashing would probably take time? and much less space.. (not sure)).

    E:
Opened a 12KB crackme in OllyDbg;
apparently there were 2.735 KB of data filled with 00,
and that's only in the CODE section.

Another was a POP executable,
a 12.3MB file where 1.7MB accounts for the DATA section, of which ~630KB is all 00.
Apparently the compiler allocates a chunk of space for the file even when not all of it is used;
some of the free space could be reserved for debugging too.
    Last edited by onex; 04-24-2010 at 06:50 AM.

  11. #86
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
Somehow they apply various algorithms depending on what datatype/stream is detected.

Most executables/DLLs/assemblies easily compress to less than half the original size.

  12. #87
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
Onex, if a file is filled with portions of just 000000000000, the compression algorithm could just list which Dwords/Qwords/clusters are all zero, and have those portions take up less than 1% of the original space.
Anand did an example of this by compressing a large IOmeter test file on the order of hundreds of MBs down to hundreds of KBs.
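As a toy illustration of the idea (not SandForce's actual algorithm): record only which fixed-size chunks are all zero, and keep the payload for the rest.
Code:
# Pack data by flagging all-zero chunks instead of storing them.
def zero_aware_pack(data: bytes, chunk: int = 4096):
    """Return a per-chunk zero-flag list plus the non-zero payload."""
    flags, payload = [], bytearray()
    for i in range(0, len(data), chunk):
        piece = data[i:i + chunk]
        is_zero = not any(piece)          # True if every byte in the chunk is 0x00
        flags.append(is_zero)
        if not is_zero:
            payload += piece
    return flags, bytes(payload)

data = bytes(1_000_000) + b"real data" * 100   # mostly zeroes, a little payload
flags, payload = zero_aware_pack(data)
print(len(data), "->", len(payload), "payload bytes +", len(flags), "chunk flags")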

  13. #88
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Just an update on the C300 performance.

I've been using it now for 3 weeks on my laptop. Yesterday I changed back to the Intel G2, not because of performance issues, but because I'm trying to find out which one is snappier.
So I redid a few tests on the C300, and there is no performance drop; in fact the AS SSD Benchmark score increased by a few points.

    as-ssd-bench C300-CTFDDAC256M 25.04.2010 12-47-18.png

    I'll probably stay on the Intel until the new firmware is released for the C300.
    (it is promised within the first week of May)

  14. #89
    Banned
    Join Date
    May 2009
    Posts
    676
Onex, if a file is filled with portions of just 000000000000, the compression algorithm could just list which Dwords/Qwords/clusters are all zero, and have those portions take up less than 1% of the original space.
Anand did an example of this by compressing a large IOmeter test file on the order of hundreds of MBs down to hundreds of KBs.
Yeah, he also said it isn't accurate to test the LE with that sort of compression, as the drive gives higher results (using that uber-compressible IOmeter file).
He added that there's an updated IOmeter coming out with an option for the bench file to use only random data, which is literally incompressible (a 161MB test file actually got a few KB bigger when trying to compress it), and so gives more accurate results,
i.e. testing the worst-case scenario for this specific SSD.

    p.s -
    I redid a few tests on the C300 and there is no performance drop, in fact the AS SSD Benchmark score increased by a few points.
Strange.
Maybe the drive works better under load, or, like the Intel controller, it learns to manipulate data better with spare chips/space etc.
    Last edited by onex; 04-25-2010 at 11:32 AM.

  15. #90
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
OK, so I think the C300 is faster than the ACARD 9010 - perhaps in all but 4K reads at low queue depth -

this is a single ACARD - box set as a single drive -



this is a single ACARD at 4.8 OC, PCIe 115 -



this is an ACARD "box" set up as 2xR0 on ICH10, from left to right - no OC, OC 4.8/PCIe 115, and lastly a dynamic disk stripe using ICH10 -


  16. #91
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
The C300 has to be benched with PCIe @ 115MHz too, or it's not a good comparison, I think. The PCIe frequency gives some performance gain.

  17. #92
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
Yep, the 1st and 3rd down on the left are both no OC, PCIe at 100.
Compared to Anvil's score of 658 above, I can only beat that with the dynamic disk, 3rd down, far right.
Based on AS SSD, it looks like the C300 is faster than the ACARDs in all but 4K reads at low queue depth.

  18. #93
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I found some time and tried the C300 on the 9260

    Your 4K QD1 is hard to beat.

    as-ssd-bench LSI MR9260-8i SC 25.04.2010 19-10-14.png

    cdm3b_dio_nra_wt_dce_16kb_ss.JPG cdm3b_dio_nra_wbc_dce_64kb_ss.JPG
The difference between these two is write-through on the left and write-back on the right.

    iometer 4KB random read 4KB aligned
    rr_4kb_4kb_aligned.JPG

    iometer 4KB random write 4KB aligned
    rw_4kb_4kb_aligned.JPG

    edit:
    iometer runs are at QD64
    Last edited by Anvil; 04-25-2010 at 03:36 PM.

  19. #94
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
Anvil, does the 128GB maintain the performance of the 256GB?
    I would be tempted if they made a 64 that maintained the performance.

  20. #95
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Steve,

    No, the 128GB is a bit slower on sequential writes and iops.

    128GB : Sequential Write (up to) 140MB/sec
    256GB : Sequential Write (up to) 215MB/sec

    128GB : Random 4k READ 50,000 IOPS vs 60,000 IOPS for the 256GB. (-10000 iops)
    128GB : Random 4k WRITE 30,000 IOPS vs 45,000 IOPS for the 256GB. (-15000 iops)

I found a link some time ago listing a C300 64GB, but I can't seem to find it now. I guess the 64GB edition would perform just like the 128GB edition.

    edit:
    The C300 64GB is listed with 70MB/s sequential write, don't know about iops but based on this it might be (s)lower than the 128GB.
    Sequential read is listed as 355MB/s for all models in the C300 series.
    Last edited by Anvil; 04-26-2010 at 02:01 AM.

  21. #96
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    I would really like to see some benches of 1231ml-4g with 4xC300-256!

  22. #97
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838


    So would I.

I've ordered one more of the 256GB C300s; I just can't stand not knowing how it performs in an array.

It doesn't look like it's degrading while TRIM is active, but it could change for the worse in RAID.

  23. #98
    I am Xtreme
    Join Date
    Nov 2008
    Location
    Warrenton, VA
    Posts
    3,029
    Great - can't wait!
    Looking forward to seeing 2x C300-256 on both ich10 and 9260 - results should be spectacular!

  24. #99
    Banned
    Join Date
    May 2009
    Posts
    676
    128GB : Random 4k READ 50,000 IOPS vs 60,000 IOPS for the 256GB. (-10000 iops)
AJA, up to 60K 4KB read IOPS!!!?
That is insane!

A few notes:
1. QD64 is probably unnecessary for any SSD under a normal PC workload;
if it has any meaning at all, it is just for testing the operation of the drive/controller.

2. How come the 4KB read is slower than the write in the AS benchmark?

3. 10K IOPS 4KB read (in AS) is less than the G2's 16K IOPS (IOmeter), yet it seems (picture at the bottom) the G2 gives only 5K.

4. The write-back cache seems to give a very nice benefit over write-through, except at QD32, where write-through gives (seemingly logically) a decent improvement.
This should maybe be checked at a normal-load QD of 8-10 through IOmeter to watch for the high-load performance loss.

5. 3/4 of a second write access time?!?

    that's a 160GB G2 taken from legit-reviews


Steve -
the ACARD is showing ~stable results (except for the read-write sequential difference), and LESS than 1/2 ms access time;
for comparison, Computerd's 9260 (or was it the 9211?) showed a minimum of 0.13-0.12 with the new firmware on the 7 (or 8?) Vertexes.

  25. #100
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
onex, disregard that AS SSD screenshot from legit-reviews. It's done using IDE mode, and says so in the picture. This is evident in the 4K-64Thrd numbers, which don't scale. In AHCI or RAID mode, an x25-M G2 160GB can do 140-160MB/s read at 4K-64Thrd, and up to 80MB/s write, typically around 60MB/s. Sequential write should also be around 100MB/s, not 80.

In reply to point 4 you listed above, we (me and Anvil) have already researched this, and found write-through gives a significant IOPS boost (through lower latency overhead), while resulting in lower max bandwidth.
EDIT: graphs
EDIT2: To clarify the graphs: the SSDs are Intel x25-M G1, G2, and x25-V. The number of SSDs is listed first, then M/V, then G1/G2, then capacity, then write-back (WB) or write-through (WT), and lastly the IOmeter file size.
The Y-axis is bandwidth in MB/s, and the X-axis is queue depth.
EDIT3: The controller used here is Anvil's LSI 9260-8i. Firmware from January-February, I think, so I think that makes it the second firmware released.
EDIT4: I have all the graphs in much higher resolution, and the xlsx document with all the raw data; if anybody wants the Excel file, I'll post it.
Attached Thumbnails: LSI 9260 random read 512b-4KB various setups.png, LSI 9260 random read 8-64KB various setups.png
    Last edited by GullLars; 04-26-2010 at 09:54 AM.

