
Thread: Thought experiment: HDD with flash read-cache?

  1. #26
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Then you could call all HDDs hybrid platter/RAM drives, since they use 8-64MB of RAM cache...
    Another point against calling this type of HDD with a flash read-cache a hybrid: ALL data is stored on the spinning platters, while on a hybrid drive some data would be stored on flash that's not stored on the HDD. That's at least what I think of as a requirement for calling it a hybrid.
    Last edited by GullLars; 04-18-2010 at 01:37 PM. Reason: typo

  2. #27
    Banned
    Join Date
    May 2009
    Posts
    676
    Never mind the naming for now; we're talking about NAND flash, right?
    Then you don't need a BBU with it, and data remaining in the cache would still be there after any sudden shutdown.

  3. #28
    Banned
    Join Date
    May 2009
    Posts
    676
    Seagate seems to have stolen your idea.
    http://www.fudzilla.com/content/view/18863/1/
    Last edited by onex; 05-19-2010 at 08:22 AM.

  4. #29
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    If you think about how long it takes to develop, I seriously doubt that :P

    I'd like to see a 3.5" drive with 25nm MLC ONFI 2.2 NAND, around 8-16GB of flash for an added $20-40.

  5. #30
    Banned
    Join Date
    May 2009
    Posts
    676
    No, of course, I was just kidding.
    Yet your idea seems to have been going through the heads of Seagate's engineers (and maybe other companies to come?)!

    As for the drives: 4GB of SLC for a start isn't that bad.
    Let's wait and see how well it performs compared to the normal drives, the VelociRaptor, or even SAS drives; if it can take them on, that would be a real winner.
    I'm also waiting to see whether they use the cache to keep the platters from spinning, for lower power consumption in enterprise use!

    E:
    Maybe you are right, maybe an 8GB module would be better, yet we will have to wait for benches and reviews; a large cache can sometimes work against its original purpose..
    Last edited by onex; 05-19-2010 at 11:10 AM.

  6. #31
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    I think the R&D doesn't make it viable. By the time you get the product made and marketed, SSDs will be so cheap you won't be able to recover your R&D losses for years, if ever. Unfortunately, the product won't be just $20-30 more than the hard drive; it would be much more expensive because the firmware has to be designed and such. Overall, I think the reason we don't see it is that hybrids were on the horizon before SSDs really started hitting the market. As soon as that happened, the projected profit margin dried up. Remember those ANS-9010 and ANS-9010B RAM drives? I have 4 of them. It makes no sense to buy them today: the cost to build a 32GB drive is over $1000, Intel SSDs can pretty much match or beat it in benchmarks, and 32GB of SSD is nowhere near that kind of price.

    Also, I don't know if it really would improve performance. Having a 4GB flash cache wouldn't be much better than having a 4GB system cache like Vista/7/Linux use. So while benchmarks for that small portion of the drive would appear faster, real-world performance might not improve at all. Also, what market segment would you be looking at? Based on today's prices there are really only three markets for hard drive space: performance, size, and the standard user. The performance group wants only the fastest. Those that want size won't see an appreciable performance increase by adding an 8GB cache to a 2000GB+ hard drive (caching .001% of a drive isn't going to REALLY matter). Standard users won't really care about that cache thing; they'll save their money and just buy the 320GB drive they need even though they'll use 100GB at most.

    So yeah, I think companies have figured out that the R&D on a product like this is a dead end.

    Edit: Don't get me wrong. I think it would have been interesting to see what kind of caching algorithms would have been invented to sustain this type of product.
    Last edited by josh1980; 05-19-2010 at 08:51 PM.

  7. #32
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    8GB/2000GB = 0.004, so it could cache 0.4% of the drive, not 0.001%.
    Caching 0.5% of a drive COULD matter if that 0.5% were small files read often and in a random pattern, like you'd see on a system drive.
    Bump it up to 16GB on a 2000GB drive, and you can cache almost 1%.

    For a caching algorithm, you could make it as simple as tracking LBA read and write frequency. LBAs read the most in chunks smaller than 32-128KB would get priority, and also a read:write ratio of 3:1 or higher so as not to wear out the flash.
    Since you could write everything in 512KB chunks, and 8-16GB is doable on 1-2 NAND dies (25nm 2-bit MLC), the controller part handling the flash wouldn't need to be that complex.
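    To make that concrete, here is a minimal Python sketch of the kind of LBA heat tracking described above. The HeatTracker name and the thresholds are illustrative assumptions for the example only; real firmware would do this bookkeeping far more compactly.
    Code:
from collections import defaultdict

# Illustrative thresholds, not values from any real drive firmware.
SMALL_READ_LIMIT = 128 * 1024   # only reads smaller than 128KB count as "hot"
MIN_READ_WRITE_RATIO = 3        # cache only if reads outnumber writes 3:1

class HeatTracker:
    """Tracks per-LBA read/write frequency to pick read-cache candidates."""

    def __init__(self):
        self.reads = defaultdict(int)
        self.writes = defaultdict(int)

    def on_read(self, lba, length_bytes):
        # Count only small reads; big sequential streams are fast from platter anyway.
        if length_bytes < SMALL_READ_LIMIT:
            self.reads[lba] += 1

    def on_write(self, lba):
        self.writes[lba] += 1

    def cache_candidates(self, top_n=1024):
        # LBAs read often and rarely overwritten, ranked by read count.
        hot = [(count, lba) for lba, count in self.reads.items()
               if count >= MIN_READ_WRITE_RATIO * max(1, self.writes[lba])]
        return [lba for count, lba in sorted(hot, reverse=True)[:top_n]]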

  8. #33
    Banned
    Join Date
    May 2009
    Posts
    676
    You've got some thoughtful points, Josh, yet don't forget: the price of a 500GB hybrid drive is about $80 less than an 80GB X25-M.
    With flash prices going down, Lars' point comes into the equation, where a 16GB module could be added with maybe just a small increase in price.

    Seagate must have used SLC in their drives in order to prevent fast wear-out of the NAND.

    Now 16GB, or even 32GB for larger drives, could add some speed to a drive, lower power consumption, and sustain a longer drive life.

    I think this move is more important for enterprise use and RAID arrays, where caching in system RAM requires a lot of memory that is most of the time being used for virtualization.

    Bottom line: we will have to wait for benches to see whether this is something relevant or just marketing buzz/hype.

  9. #34
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by onex View Post
    Seagate seems to have stolen your idea.
    The cat is out of the bag:

    http://www.anandtech.com/show/3734/s...ood-hybrid-hdd

  10. #35
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Seagate used a 4GB SLC chip as a read-only cache. Exactly what I ranted about, but with lower capacity and more expensive flash.
    SLC is generally 2.5x more expensive than MLC per GB. You could make an 8GB MLC cache at a lower cost, and with only about a 20-25% performance impact on both read IOPS and read bandwidth from the chip.
    Reference for the numbers I will be listing: http://onfi.org/wp-content/uploads/2...bottleneck.pdf
    An 8GB MLC package using a 2-plane single-die configuration with ONFI 2.1 specs could do about 7500 IOPS and 80-90MB/s sequential read. Using 2 such dies in a single package on a single ONFI 2.1 bus would give 16GB, 7500 IOPS and 160-180MB/s sequential read. At roughly $2/GB for MLC, that's $32 for the NAND, and say $20 for a simple single-channel ONFI 2.1 flash controller; you could get 7500 IOPS and 160-180MB/s for cache hits with a ~$50 price increase.
    IMHO, WD should add such a flash read-cache to the next "Black" drives they launch (and the VR if they launch more of them). With a simple LBA read-pattern tracking algorithm, the R&D shouldn't be that big. The Scorpio Black could also benefit greatly from such a 16GB MLC read-cache.
    Keep in mind, the 160-180MB/s read from the cache can be done in parallel with max read/write speed to the platters, which is around 90-100MB/s nowadays for 7200RPM 2.5" drives, so you could get 250-280MB/s peak bandwidth.
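    As a quick sanity check on those figures, here is a tiny Python sketch that just redoes the arithmetic above from the post's own rough 2010 ballpark inputs (the prices and per-die speeds are the quoted estimates, not measurements):
    Code:
# Inputs are the rough figures quoted above, not measured values.
GB_PER_DIE = 8
DIES = 2
MLC_PRICE_PER_GB = 2.0               # ~$2/GB MLC
CONTROLLER_COST = 20.0               # assumed simple single-channel ONFI controller

NAND_READ_MBPS_PER_DIE = (80, 90)    # ONFI 2.1, 2-plane, per die
PLATTER_MBPS = (90, 100)             # 7200RPM 2.5" platter throughput

nand_cost = GB_PER_DIE * DIES * MLC_PRICE_PER_GB
total_cost = nand_cost + CONTROLLER_COST
cache_mbps = tuple(x * DIES for x in NAND_READ_MBPS_PER_DIE)
peak_mbps = tuple(c + p for c, p in zip(cache_mbps, PLATTER_MBPS))

print(f"NAND cost: ${nand_cost:.0f}, added BOM: ~${total_cost:.0f}")
print(f"Cache read: {cache_mbps[0]}-{cache_mbps[1]} MB/s")
print(f"Peak combined: {peak_mbps[0]}-{peak_mbps[1]} MB/s")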

  11. #36
    Banned
    Join Date
    May 2009
    Posts
    676
    Pff, no, you won't get 260MB/s write bandwidth from such a drive; you have constraints that are not 1:1.
    If you hook up an SLC chip and a platter, you would probably suffer from some latency and/or delays; it is never 1+1 = 2.
    It's like a rookie saying HT would get a Bloomfield to operate at 200%. It doesn't, and it would probably take some time to get there (and even then probably not 100%).
    Without being picky over a few MB/s, it is a point that is important to keep in mind.

    The engineers over at Seagate must have had a good reason to place an SLC cache instead of MLC even though the prices for MLC are higher.
    It could be that the reason is the cache doesn't include all the features current SSD controllers have (TRIM, GC, write-amplification management), so its life expectancy drops.
    It might not even include spare area (though that is quite doubtful); either way, such a drive with MLC might lose its performance after relatively short usage.

    Manufacturers are currently working on 100,000-cycle MLC (or have made it already), so this idea might be more sensible in time to come.
    For two dies, there could be a different controller which can handle both of them in a sort of RAID array.
    The Crucial C300 uses 2x ARM A9 cores according to AnandTech, and ARM controllers in general should not be too expensive (though as for the A9s, I haven't checked).

    Placing a SandForce/Marvell/Intel/whatever controller in such a drive would probably make it much more expensive, even though the Crucial drive isn't any cheaper.

    As for the 16GB cache, it seems a bit too large at the moment; it is almost like creating an SSD out of the drive.
    The current Momentus is just a hybrid drive, meant for casual users and not in general for performance.
    They could also create a 10K version that would function like the VR, maybe even build it with smaller platters (like short-stroking) for extra speed at the expense of some storage capacity.
    They could place a dual 4GB RAM module with a small battery.
    They can do many things...

    Just as Intel can create a 48-core chip, yet they are probably not interested.
    If they created these drives with SLC, they must be aware of what they are doing.
    They are probably also aware of how MLC behaves one way and SLC another in relation to the number of planes and links per die, as the Micron PDF shows.

    This is their job, and they know at least a bit of what they are doing.
    If there were a guy from Seagate coming to the forum and saying this is the speediest solution, then maybe it could be argued...

    As long as you don't have a lab and aren't testing these things, it is like saying Intel should bloat its L1 and L2 caches, add 2 QPI links, and double the die size in order to place more cores in it, etc.
    Yes, it would be speedier; yes, it could possibly work, yet this is just simple math.
    As you probably know already, it is never linear with computers; everything makes a difference. A certain controller could work one way, and another the other. A company has to test everything from every angle and aspect before releasing it.
    Telling them "look, you should do this and that" is superfluous, no offense my friend.
    As users, we generally don't know even 30% of what they are doing...

  12. #37
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Quote Originally Posted by onex View Post
    Pff, no, you won't get 260MB/s write bandwidth from such a drive; you have constraints that are not 1:1.
    Please quote me where I said 260MB/s write bandwidth.
    I said 160-180MB/s sequential read from flash + 90-100MB/s platter throughput (either read or write) = 250-280MB/s total throughput. That may be up to 100MB/s write and 180MB/s read in mixed sequential read/write where reads come purely from the cache and writes go to the platters. You can also get 100MB/s read from the platters + 180MB/s read from the cache at the same time.

    If you hook up an SLC chip and a platter, you would probably suffer from some latency and/or delays; it is never 1+1 = 2.
    Yes, there will be some latency from checking the cache table, likely on the order of single to low double-digit µs. That's significantly lower than flash access time.


    The engineers over at Seagate must have had a good reason to place an SLC cache instead of MLC even though the prices for MLC are higher.
    MLC is cheaper, SLC is more expensive.
    The reasons for using SLC are:
    -10x more write cycles, which allows way more aggressive caching. Useful if hot files are overwritten often or take up much more space than the cache and must be cycled.
    -20-25% higher read performance, both access time and bandwidth.
    -Generally higher-binned parts.

    It could be that the reason is the cache doesn't include all the features current SSD controllers have (TRIM, GC, write-amplification management), so its life expectancy drops.
    It might not even include spare area (though that is quite doubtful); either way, such a drive with MLC might lose its performance after relatively short usage.
    <...>
    For two dies, there could be a different controller which can handle both of them in a sort of RAID array.
    No need for any such things in a read-only cache. The controller can write to it in MB-size sequential blocks if it wants to; a basic wear-leveling algorithm will do. Write performance will not be very important for such a cache.
    Two dies in one TSOP (typical flash package) on the same flash bus do not require any form of RAID; the ONFI specification allows interleaving. Making a controller able to handle <180MB/s, minimal ECC and wear leveling, no random writes, and only one channel should not be a huge undertaking and does not warrant an expensive or complex controller. A memory-stick controller is almost more powerful.
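    For what it's worth, here is a rough Python sketch of that "write in big sequential blocks, basic wear leveling" idea for a read-only cache. The class, the block granularity and the method names are invented for the illustration; real NAND management lives in the drive's firmware.
    Code:
class ReadCacheFlash:
    """Read-only cache: fill whole erase blocks sequentially, evict whole blocks.

    Because nothing in the cache is the only copy of the data, eviction is just
    a block erase; no per-page garbage collection is needed.
    """

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks    # wear per erase block
        self.free_blocks = set(range(num_blocks))
        self.block_contents = {}                # block -> list of cached LBAs

    def _pick_block(self):
        # Basic wear leveling: always fill the least-erased free block.
        return min(self.free_blocks, key=lambda b: self.erase_counts[b])

    def fill_block(self, lbas_and_data):
        """Write one erase block's worth (say ~1MB) of hot data sequentially."""
        block = self._pick_block()
        self.free_blocks.discard(block)
        self.block_contents[block] = [lba for lba, _data in lbas_and_data]
        return block

    def evict_block(self, block):
        """Invalidate a whole block and return it to the free pool."""
        self.block_contents.pop(block, None)
        self.erase_counts[block] += 1
        self.free_blocks.add(block)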

    As for the 16GB cache, it seems a bit too large at the moment; it is almost like creating an SSD out of the drive.
    A 16GB cache on a 500GB drive is 3.2% of total drive capacity, meaning you could cache about 3% of the drive. 4GB is 0.8% of the drive.
    There is a significant benefit to being able to cache 3% instead of 0.8%. Since you can fit a lot more in the cache, you don't need to be as aggressive about replacing data, and it's great if you have a larger set of hot files.

    Just as Intel can create a 48-core chip, yet they are probably not interested.
    If they created these drives with SLC, they must be aware of what they are doing.
    <rant>
    We generally don't know even 30% of what they are doing...
    I don't see what this part has to do with anything, but I thought I'd reply to it too. The basic concepts of making a NAND flash read-cache, and the implications of size and SLC vs MLC, can in no way be compared to the other stuff in this part of your post. The idea and how it works is pretty straightforward if you are familiar with read caches on RAID controllers and the concept of memory tiering.

  14. #39
    Banned
    Join Date
    May 2009
    Posts
    676
    Sorry, I was confusing cache and buffer operation.
    SLC of course costs much more than MLC, and your 260MB/s write figure is overestimated.

    Please quote me where I said 260MB/s write bandwidth.
    The write numbers aren't that important; the important part is that you seem to take things as obvious and as if they are easy to implement.
    There is a possibility that manufacturers haven't thought about this idea (doubtful), yet in all likelihood this cached-HDD design passed through their minds at least a few months or even a year back.

    Yes, there will be some latency from checking the cache table, likely on the order of single to low double-digit µs. That's significantly lower than flash access time.
    Yes. MLC and SLC should not differ from the controller's perspective; latency should be lower, and so should its way of operating within its constraints. Yet it is easy to give ideas (not that that is bad); without actually testing them, they are only words, which is not much.
    There is no word on Seagate trying MLC for their hybrid drive (in the end, it is called hybrid.. :gotcha: ), and if they haven't tried and rejected it, it's hard to tell the overall behaviour of the drive.
    I think your approach amounts to saying "they don't know what they are doing", which cannot be said without testing.

    10x more write cycles, which allows way more aggressive caching.
    OK, noted...
    20-25% higher read performance, both access time and bandwidth.
    OK.
    We just don't know if it was the performance that made them decide on SLC for the cache function; it could be both.
    If it was performance, then MLC might not be fast enough.
    I think it has to do with the write cycles, and the performance is just a beneficial bonus on top of that.. but really, I don't know.

    Two dies in one TSOP (typical flash package) on the same flash bus do not require any form of RAID.
    I was talking about two separate dies; that could call for a different controller than they are currently using.
    Yet I don't know how many channels their controller can take, or why they would use a dual-channel controller for single-channel operation, if there are any single-channel controllers at all and they don't come standard with a few channels...

    Aside from that, you say they work with an internal write buffer so the cache is written to much less (bigger blocks).
    This also involves write amplification, and 4GB is a small area to manage; there is not much spare area left (there must be some spare pool for bad-block replacement).
    The cache should be written to more than any disk, since any frequently read data gets written into it, so that is probably the necessity for SLC.
    A 16GB cache on a 500GB drive is 3.2% of total drive capacity, meaning you could cache about 3% of the drive. 4GB is 0.8% of the drive.
    There is a significant benefit to being able to cache 3% instead of 0.8%. Since you can fit a lot more in the cache, you don't need to be as aggressive about replacing data, and it's great if you have a larger set of hot files.
    Yeah, no doubt it relieves some issues, yet it would make the drive more vulnerable to wear-out.
    MLC is still MLC, and as long as it can't take 100K write cycles, placing it in a read cache is not wise.
    These drives should work for up to 3 years, and maybe more for enterprise.

    I don't see what this part has to do with anything,
    That part is saying that you are bringing up an idea in a way that says "hey, you missed that spot".
    Well, how do you know? How do you know they haven't been thinking about that?
    You come across self-assured and seem even to underestimate those engineers' understanding.
    It is not a rant at all, just an explanation.
    It is the same as an enthusiast showing up on an Intel CPU forum and saying "why don't you add more cache? why don't you add more die space?" etc.
    I'm saying these guys know what they are doing, and part of their decisions rests on the market adoption of their product, the market's financial state, their competition with other companies, the longevity of their products, and a roadmap for future drives and development,
    not just making some enthusiast happy.
    I'm saying we generally don't know even 30% of what they are doing...
    Last edited by onex; 05-24-2010 at 05:16 PM.

  15. #40
    Registered User
    Join Date
    Jan 2008
    Posts
    29
    Hey, that Seagate Momentus XT 500GB is looking good right about now! An affordable alternative to a pure SSD has arrived. A good-looking upgrade from a pair of 7200.10 Barracudas.

  16. #41
    Xtreme Addict
    Join Date
    Jun 2007
    Posts
    2,064
    Quote Originally Posted by GullLars View Post
    So, lately I've been skimming through a few old articles and papers on HDD/RAM and newer flash/RAM hybrid drives. After looking at the specs and contemplating the architecture a bit, I was left with a question:
    Why hasn't any HDD maker yet added a flash read-cache?

    By adding a single 4-8GB MLC NAND chip, costing roughly $2-3/GB = $8-24 added cost, and using it for read-caching hot-files, you can get around 4-5000 4KB random read IOPS = 16-20MB/s (@QD 1) and roughly 40-60MB/s sequential read for the cached data. And since it's a read-cache, write speeds and write cycles will be largely irrelevant.

    Tracking hot files should be easy to implement simply by logging read access to LBAs, and with a bit more effort, filtering for LBAs being read in a small-block random pattern. Possibly also caching the file table, folder/file structure and metadata, as well as the data typically read in the first seconds after power-up or spin-up. This could allow low-power "green" drives to not spin up every time you access them if you don't need un-cached data, and could also allow the drive to spin at a lower speed while delivering adequate performance. Lower rotational speed could also allow higher storage density.

    Using this type of caching would have a noticeable effect on typical usage patterns, and especially multi-tasking, but would likely not make an impact on benchmarks since it would take some time for new data to reach the cache.

    Larger hot-files could benefit from the cache as they could be read from both flash and disk at the same time with the speed of both combined (80-140 + 40-60).

    With a custom driver (or a mini-USB port?) you could also have a tool-box to allow advanced users to manipulate the cache. Filtering by filetype, file size, heat (frequency of access, both short-term and long-term), location, etc.

    The failure of a cache of this kind is not (or does not have to be) fatal for the stored data or even the function of the HDD.


    Any thoughts people?
    Does this sound doable, and if so, to what extent?
    What kind of drives would benefit the most and be acceptable with a bit higher price and performance?
    Would HDD makers consider this or do this if it was doable and there was a market?
    (I think it's doable at least with simple LBA access frequency logging, and there being a market in almost all segments, however niche in some)
    Get this if you don't want to get an SSD:

    http://translate.google.com/translat...irasawa002.htm

    http://www.geekstuff4u.com/3-5-diy-ssd.html

    http://www.techpowerup.com/75490/Sha...S_DIY_SSD.html

    Testing with HD Tune, six 8GB SDHC memory cards with Class 6 speed ratings had a read speed of 140 MB/s and a write speed of 115 MB/s.
    The Intel X25-E SSDSA2SH032G1 (32GB/64GB) does 250MB/s read, 170MB/s write.



  17. #42
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    I just read the review on Tom's Hardware about the Seagate Hybrid drive. http://www.tomshardware.com/reviews/...-ssd,2638.html

    Overall, I am not particularly impressed. I still stand by my original assessment: figuring out what to cache, when to cache it, etc. keeps hybrid drives from being a significant player. There are a few benchmarks that are worse (like seek times, wtf?!) and others that are slightly better. I also think that benchmarks are going to prove completely useless for a design such as this. You have the RAM cache, you have the disk cache, and now you have the flash cache? If you are on a RAID controller you have the controller's cache too. How many "levels" do we want to have? Quite frankly, I'm sure there are duplicates between all of those cache levels, and somewhere someone is going to realize that we have diminishing returns on cache. CPUs have L1, L2, and some L3 cache. I have yet to see a review of an L4 cache. Why? Because diminishing returns start playing a larger factor and the cost does not result in an appreciable performance benefit.

    But, who am I to argue with it? If anyone thinks these are a great buy, feel free to buy one. I hope they provide the performance you want, and the reliability you hope for. I'll save my money for the real SSD.

  18. #43
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    If you have the money and the possibility to run an SSD for OS + apps and an HDD for the rest, that's a clearly better solution.
    The Tom's Hardware review focuses on synthetic testing with big test files, so I'm not surprised there weren't many performance gains to be measured.
    If you check Anand's review, he found it was faster than a VelociRaptor in many real-world tests, like opening apps and booting.


    @onex: I'm not saying Seagate's engineers have made a mistake, or that they should fix something. I'm talking about an alternative approach, where you have a larger and less aggressive cache. If you set it up to not be so aggressive, the lifetime of MLC should not be an issue.
    Maybe Seagate found that a smaller, aggressive cache worked better for their targeted market segment, I don't know. I strongly suspect a bigger, less aggressive cache is more beneficial as the only drive in a laptop.
    The flash controller would only have to be single-channel, the two dies can share a channel in serial configuration, and since you can get two dies in one package, the physical footprint is the same.


    @serpentarius: I've seen that solution before, over a year ago. The performance is worse than the lowest-end Indilinx Barefoot drives, or the JMF612. Modern SSDs easily do 200MB/s read, 100MB/s+ write, 5000-30000 IOPS (20-120MB/s 4KB random, that's 4-24 times higher than the numbers you link to). The cost of the drive + SDHC cards also ends up higher per GB than MLC drives.
    It's a fun product, but it's not good for performance.

  19. #44
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    I have that product. It sucks, badly. I tried to install Windows XP Home; two hours into the installation I turned the laptop off. Later I did decide to install Windows just to see how slow it would be: it had 2-3 minute boot times with no other software installed. Overall, it would make a good drive to put games on or use as a backup drive, but little else. Definitely not as a boot drive. I wouldn't recommend anyone purchase that drive because the cost just doesn't justify it.

  20. #45
    Registered User
    Join Date
    Apr 2010
    Posts
    23
    I'd like to see a 3.5" drive with 25nm MLC ONFI 2.2 NAND, around 8-16GB of flash for an added $20-40.
    I've been trying to get more information on the ONFI specs for some time now, especially regarding performance. I found whitepapers, but is there a short list where I can read more about the performance of the different ONFI versions?


    OT:

    4GB-8GB of cache is enough IMHO. If you check your file sizes: in 4GB, you can fit every file up to 256KB on a desktop Windows installation with many installed programs. The performance boost from caching files bigger than that should be minimal.
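    If anyone wants to check that claim on their own install, here is a quick Python sketch that just sums the size of every file under a cutoff; the root path and the 256KB cutoff are example values.
    Code:
import os

def small_file_total(root, cutoff_bytes=256 * 1024):
    """Walk a directory tree and total the sizes of files under the cutoff."""
    total = count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # skip unreadable/vanished files
            if size <= cutoff_bytes:
                total += size
                count += 1
    return count, total

if __name__ == "__main__":
    n, total = small_file_total("C:\\")  # use "/" on Linux
    print(f"{n} files <= 256 KB, totalling {total / 2**30:.2f} GiB")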

  21. #46
    Banned
    Join Date
    May 2009
    Posts
    676
    I've been trying to get more information on the ONFI specs for some time now, especially regarding performance. I found whitepapers, but is there a short list where I can read more about the performance of the different ONFI versions?
    You can try their site.

    @onex: I'm not saying Seagate's engineers have made a mistake, or that they should fix something. I'm talking about an alternative approach, where you have a larger and less aggressive cache. If you set it up to not be so aggressive, the lifetime of MLC should not be an issue.
    16GB of cache sounds a bit too big, though it could take some of the work off it.
    Remember, a read cache can see a lot of writes; MLC NAND could be problematic.
    RAID controllers use DRAM, probably for speed and possibly out of degradation concerns, even though they have to use a battery backup unit, which is pricey, to keep the data safe.

    and since you can get two dies in one package, the physical footprint is the same.
    16GB would be 2x 8GB packages.

  22. #47
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    RAID controllers only need a BBU if you are using the RAM cache for writes. If you use it as read-only (for hot files and read-ahead) you don't need the BBU.

    Here is some info on ONFI (got you confused there?): http://onfi.org/wp-content/uploads/2...bottleneck.pdf
    Since these are the most recent and accurate numbers I have on ONFI 2.x performance, I've been using them. Please give me references to newer numbers if you stumble upon some.


    Advantages of using RAM over flash for caching:
    -unlimited write durability allows use as a write-cache, provided a BBU
    -lower latency (up to 100x+ lower)
    -higher bandwidth (a single DDR2 800MHz DIMM can do 6.4GB/s, close to saturating PCIe 2.0 x16)

    Advantages of using NAND flash for caching:
    -way lower cost
    -much higher capacity density
    -lower power
    -"good enough" read latency to remove/alleviate the IO bottleneck (~50µs access, 5000-10000 IOPS), and enough read bandwidth to saturate SATA/SAS interfaces with a handful of dies (80-120MB/s per die)
    -non-volatile, data stays in the cache without power

    Short summary: If you need write caching, RAM is your friend. If you need high density and can make do with read-only caching on a SATA/SAS interface, NAND provides value.

    @Eggcake: I'm well aware of the amount of small files on a typical Windows installation; if you skim through my rants on the first page, you'll see I've discussed it. 4GB should hold most small files on a typical computer; however, you need to update the cache when the files are overwritten.
    On a typical computer, you'll have a good amount of overwrites of small files, which would lead to a lot of writes to the cache. By making the cache 2-3 times the size of the small-file set, you can have multiple versions (all but one outdated) of the same hot files before you need to flush the old invalid ones. If you have 4GB of files below 128KB, and 10% of them are overwritten every week (400MB of writes per week), with 10% of those again being overwritten 10 times every day (+400MB of writes per day), the 12GB of extra space will give you about a month before you need to flush invalid files, at which point you will have large blocks you can clean effectively.
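    Just to double-check that write-budget arithmetic, a tiny Python sketch using the same hypothetical workload numbers as above:
    Code:
# Hypothetical workload from the paragraph above, not a measurement.
hot_set_gb = 4.0                                      # small files kept hot
weekly_overwrite_mb = 0.10 * hot_set_gb * 1024        # ~400MB of writes per week
daily_churn_mb = 0.10 * weekly_overwrite_mb * 10      # ~400MB of writes per day
daily_invalid_mb = daily_churn_mb + weekly_overwrite_mb / 7

spare_gb = 16.0 - hot_set_gb                          # 12GB of slack in a 16GB cache
days_until_flush = spare_gb * 1024 / daily_invalid_mb
print(f"~{daily_invalid_mb:.0f} MB of stale data per day, "
      f"flush needed after ~{days_until_flush:.0f} days")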
    You could also have the drive run boot traces, i.e. the drive logging all LBAs read in the minute after it's turned on. After a few boots, you will have a set of LBAs that are read most times, and the order in which they are read. By arranging them sequentially in the flash cache, you could get 150MB/s+ during start-up for common files.
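    And purely as an illustration of the boot-trace idea, a minimal Python sketch; the class name, the keep-on-half-the-boots threshold and the API are assumptions made for the example, not how any real drive does it.
    Code:
from collections import Counter

class BootTrace:
    """Learn which LBAs are read right after power-on, and in what order."""

    def __init__(self, keep_fraction=0.5):
        self.boot_logs = []                  # one ordered LBA list per boot
        self.keep_fraction = keep_fraction   # keep LBAs seen on >= half the boots

    def record_boot(self, lbas_in_order):
        """Call once per boot with the LBAs read in the first minute."""
        self.boot_logs.append(list(lbas_in_order))

    def cache_layout(self):
        """Return LBAs seen on enough boots, in first-seen order, ready to be
        laid out sequentially in the flash cache."""
        seen_on = Counter()
        first_order = {}
        for log in self.boot_logs:
            for lba in dict.fromkeys(log):   # dedupe within a single boot
                seen_on[lba] += 1
                first_order.setdefault(lba, None)
        needed = self.keep_fraction * len(self.boot_logs)
        return [lba for lba in first_order if seen_on[lba] >= needed]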

  23. #48
    Banned
    Join Date
    May 2009
    Posts
    676
    RAID controllers only need a BBU if you are using the RAM cache for writes. If you use it as read-only (for hot files and read-ahead) you don't need the BBU.
    You are right on this one (I overlooked it), nice.

    Please give me references to newer numbers if you stumble upon some.
    I already gave you this one; it should be enough for now. It seems to be the latest document currently.

    Micron has some fabulous documents and technical notes, worth taking a look at.

    The IM (Intel/Micron) joint venture's 25nm NAND should take die sizes even lower than the current ones (I read about the actual sizes somewhere, but forgot them).
    The 34nm to 25nm move is not a full node, so it doesn't mean a 4GB die becomes an 8GB die (die, not package), unless they also resize it; a full node down from 34nm would be 22nm.

    Apparently an Intel X25-M 80GB drive has only 10 chips for 80GB of capacity, so they should be 8GB each.
    Strangely enough, there are no spare chips for spare area; either Intel's drives don't do it that way, or the chips are placed on the other side of the PCB (which is not shown in the pictures).

    The LE (100GB) apparently has 16 packages (E: each 8GB, not 16GB) for a total of 128GB.
    pictures.

    The OCZ Vertex 2 Pro features Intel 29F64G08CAMDB MLC flash memory. In total there are 16 chips and each IC is 8GB in density. That adds up to 128GB of storage space, but only 93.1GB of it will be usable space!
    Last edited by onex; 05-25-2010 at 11:00 AM.

  24. #49
    Registered User
    Join Date
    Apr 2010
    Posts
    23
    AFAIK it is 80GiB; you only get 74.5GiB of formatted space. So in the end you get about 9% spare area on Intel drives.

  25. #50
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Correct, Eggcake: the spare area is not taken from entire chips (dies), but from reserved portions of them.

    The Intel X25-M 80GB uses 10 channels with 10 TSOPs (packages); the number of dies may be 2 per package, meaning 4GB dies, or one die per package at 8GB.
    The X25-M 160GB uses 10 channels with 20 TSOPs, half on each side of the PCB.
