
Thread: Thought experiment: HDD with flash read-cache?

  1. #1
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513

    Thought experiment: HDD with flash read-cache?

    So, lately I've been skimming through a few old articles and papers on HDD/RAM and newer flash/RAM hybrid drives. After looking at the specs and contemplating the architecture a bit, I was left with a question:
    Why hasn't any HDD maker yet added a flash read-cache?

    By adding a single 4-8GB MLC NAND chip, costing roughly $2-3/GB = $8-24 of added cost, and using it for read-caching hot files, you could get around 4,000-5,000 4KB random read IOPS = 16-20MB/s (@QD 1) and roughly 40-60MB/s sequential read for the cached data. And since it's a read-cache, write speeds and write cycles would be largely irrelevant.

    Tracking hot files should be easy to implement simply by logging read accesses to LBAs, and, with a bit more effort, by filtering for LBAs being read in a small-block random pattern. You could possibly also cache the file table, folder/file structure, and metadata, as well as the data typically read in the first seconds after power-up or spin-up. This would allow low-power "green" drives to avoid spinning up every time you access them if you don't need un-cached data, and could also allow the drive to spin at a lower speed while delivering adequate performance. Lower rotational speed could in turn allow higher storage density.
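    To make the hot-LBA tracking concrete, here's a minimal sketch of the idea (purely illustrative; all names, the region size, and the LFU-style eviction are my assumptions, not any vendor's firmware):

        # Hypothetical sketch: count reads per fixed-size LBA region and
        # promote the hottest regions into a small flash read-cache.
        from collections import Counter

        SECTOR = 512
        REGION_SIZE = 32 * 1024                   # heat tracked per 32 KB region
        CACHE_REGIONS = (8 << 30) // REGION_SIZE  # regions that fit in 8 GB

        heat = Counter()                          # region index -> read count
        cached = set()                            # regions mirrored in flash

        def on_read(lba):
            """Record a host read; promote the region if it's hot enough."""
            region = (lba * SECTOR) // REGION_SIZE
            heat[region] += 1
            if region in cached:
                return                            # cache hit, nothing to do
            if len(cached) < CACHE_REGIONS:
                cached.add(region)                # free space: just promote
                return
            coldest = min(cached, key=heat.__getitem__)
            if heat[region] > heat[coldest]:      # LFU-style eviction
                cached.discard(coldest)
                cached.add(region)                # firmware would copy data here

    A real controller would use an approximate LFU structure instead of the linear min() scan, and would decay the counters over time so stale hot files age out, but the principle is the same.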

    Using this type of caching would have a noticeable effect on typical usage patterns, and especially on multi-tasking, but would likely not make an impact on benchmarks, since it would take some time for new data to reach the cache.

    Larger hot files could benefit from the cache too, as they could be read from both flash and disk at the same time, with the speed of both combined (80-140 + 40-60 MB/s).

    With a custom driver (or a mini-USB port?) you could also have a toolbox allowing advanced users to manipulate the cache: filtering by file type, file size, heat (frequency of access, both short-term and long-term), location, etc.

    A failure of this kind of cache is not (or does not have to be) fatal to the stored data, or even to the function of the HDD.


    Any thoughts, people?
    Does this sound doable, and if so, to what extent?
    What kinds of drives would benefit the most and still be acceptable at a somewhat higher price and performance?
    Would HDD makers consider this, or do it, if it were doable and there was a market?
    (I think it's doable, at least with simple LBA access-frequency logging, and that there's a market in almost all segments, however niche in some.)

  2. #2
    Crunching For The Points! NKrader
    Join Date
    Dec 2005
    Location
    Renton WA, USA
    Posts
    2,891
    Because they want you to buy SSDs for ridiculous amounts of money instead? Because people don't want in-between speeds; they either want normal or the bleeding edge of technology?

  3. #3
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    As an interim measure this would be a good, viable option. Mixing technologies as you propose should be able to give SSD performance, larger capacities and cheaper prices. However, as SSDs become more mainstream, prices drop to HDD levels and capacities increase, the value proposition would quickly disappear. I guess the question is how long it would be a viable option and how much it would cost to develop.

    You would have thought that Western Digital would have considered this, but then again it's amazing how disjointed big companies are when it comes to combining technologies. (RAID TRIM support and Intel come to mind quite quickly.)

  4. #4
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    @NKrader: That makes sense for two hard drive makers: Samsung and Western Digital.
    None of the other HDD makers have SSDs for the consumer market.

    My point is it would be a great value-add that would beat competing HDDs, and it would allow "green" drives to consume even less power while at the same time boosting performance for general usage.

    If the implementation is done in the easiest way I've mentioned above, it won't take much R&D either, and since this kind of cache is (or can be made) non-fatal to the drive if it fails, there is very little risk of angry customers.

  5. #5
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Samsung had a go at it a few years ago
    The Samsung Hybrid drive, Link to extremetech, Link to PCWorld

  6. #6
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    I see I partially answered you there, audienceofone, without seeing your post :P
    What Intel did to its G1 customers can be summed up as burning them; you don't get away with that many times before people start to boycott you...


    Regarding the timescale in which this type of cache/hybrid solution could live (not really a hybrid though with a single chip; a hybrid would be more like 4 chips with NCQ for 20K IOPS read and 160-240MB/s sequential read), it will depend on which segment it's done in.
    It could have a 5-year lifetime in "green" drives as a power-saving measure that at the same time increases average random IOPS. (3-5 generations/versions of drives)
    For "black" drives it would likely have a 2-3 year lifespan for higher-capacity drives, as an IOPS boost for capacities that are too expensive as pure SSDs. (2-3 generations/versions)
    Used as, e.g., a more genuine SSD-HDD hybrid with 4 NAND chips in parallel with NCQ, it could have a lifetime of 2-3 years as well. (2-3 generations/versions) Such a device could also do random write caching and/or burst write boosting, as well as sequential read boosting of large hot files.


    EDIT: @Anvil, if I remember correctly, it used only a small amount of flash and required OS support, which was not ready for it. What I'm talking about here are "true" cache/hybrid drives that are "plug and play" under any OS.
    Last edited by GullLars; 04-07-2010 at 11:01 AM.

  7. #7
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    They had plans, but I don't think it worked out the way they thought.

    "Andy Higginbotham, director of sales and marketing for Samsung's Hard Disk Drive Group, notes that NAND flash's cost per gigabyte is expected to decrease by about 50 percent per year. Considering that timeline, Higginbotham says, we can expect to see hybrid hard drives incorporate 512MB of flash in 2008, 1GB of flash in 2009, and 2GB in 2010--all for the same cost as integrating 256MB today."

    It might still happen, but I have my doubts.
    I'd say HDDs for storage and SSDs for speed, though there may be a market in between.

    It's a great idea though; this way notebooks/small laptops and low- to mid-end computers could have the "best of both worlds".

  8. #8
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    I remember reading on storagesearch.com a couple of years ago that the Samsung HDD-flash hybrid drive depended on Vista to make use of the flash component, and that it was poorly implemented. The tiny 256MB size wouldn't do much good used in the way those reviews you linked seem to indicate.

    A 4-8GB flash cache, on the other hand, is enough to cache all files below 1MB on a Windows 7 installation, in addition to some hot files.

    If I were to set the caching parameters using the simplest form of LBA read-access tracking and filtering, which can be done entirely onboard the disk, I would set it to cache anything read in 4-16KB chunks in a random (or single sequential) access pattern, then go on to caching the hottest LBA blocks in the 32-128KB range, and after that cache anything read within 10-20 seconds of every (or most) power-on(s) that would still fit in the rest of the cache.
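    As a sketch of that admission policy (hypothetical; the tier thresholds, heat cutoff, and the 15-second boot window are my assumptions for illustration):

        # Hypothetical three-tier admission filter: small random reads first,
        # then hot 32-128KB blocks, then data read shortly after power-on.
        import time

        BOOT_WINDOW_S = 15                  # assumed "just after power-on" window
        power_on_time = time.monotonic()

        def cache_priority(read_size, is_random, heat_count):
            """Return a tier; higher tiers are admitted to the cache first."""
            if is_random and 4096 <= read_size <= 16384:
                return 3                    # tier 1: 4-16KB random reads
            if 32768 <= read_size <= 131072 and heat_count >= 4:
                return 2                    # tier 2: hot 32-128KB blocks
            if time.monotonic() - power_on_time < BOOT_WINDOW_S:
                return 1                    # tier 3: early power-on reads
            return 0                        # not a caching candidate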

    EDIT: The added ~$20 for 8GB of MLC flash would easily be justified by the added performance in 1-2TB 3.5" HDDs, and in 500GB+ 2.5" HDDs used for OS, programs, and games.
    Last edited by GullLars; 04-07-2010 at 11:33 AM.

  9. #9
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Have you come across this before? http://www.dataslide.com/technology.html

    I don’t know much about it but the concept seemed quite cool. Performance is not so bad either.

    Hard Rectangular Drive
    160,000 IOPS & 500MB/sec and low power (<4 watts) for a magnetic storage device:
    1. A piezoelectric actuator keeps the rectangular media in precise motion
    2. A diamond solid-lubricant coating protects the surfaces for years of worry-free service
    3. A massively parallel 2D array of magnetic heads reads from or writes to up to 64 embedded heads at a time

  10. #10
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    I've seen it before. It would be cool if it worked, but if it did, there would likely be more buzz about it and prototypes demonstrated.
    160,000 IOPS, but it does not say random or sequential, or at which block size. It could be sequential 512B, in which case 80MB/s isn't that impressive.

  11. #11
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    It looks like they are trying to secure Series A venture funding, so it's unlikely they will appear on the street anytime soon. I'm going to dig around to see what I can find out.
    On the technical page they compare 35,000 RD / 3,300 WR (SSD) with 160,000 RD/WR (Hard Rectangular Drive), so I'm assuming they are talking about 4K random reads/writes. Insane.

  12. #12
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    There are RAID card companies (Adaptec and LSI) that are already doing a form of this hybrid, but with arrays. Basically they use a single SSD as cache for an HDD array, and the RAID controller has intelligent caching that allows for a 50x improvement in IOPS from that array. Very smart.

    It is called CacheCade from LSI, or MaxIQ from Adaptec. If they are already doing it for arrays, I imagine it won't be long before they implement it into a single unit (disk).
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  13. #13
    I am Xtreme
    Join Date
    Dec 2008
    Location
    France
    Posts
    9,060
    Quote Originally Posted by Computurd View Post
    There are RAID card companies (Adaptec and LSI) that are already doing a form of this hybrid, but with arrays. Basically they use a single SSD as cache for an HDD array, and the RAID controller has intelligent caching that allows for a 50x improvement in IOPS from that array. Very smart.

    It is called CacheCade from LSI, or MaxIQ from Adaptec. If they are already doing it for arrays, I imagine it won't be long before they implement it into a single unit (disk).

    Or like Silverstone HDDBoost...
    Donate to XS forums
    Quote Originally Posted by jayhall0315 View Post
    If you are really extreme, you never let informed facts or the scientific method hold you back from your journey to the wrong answer.

  14. #14
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    That is exactly what I meant, Computurd, but my point was doing it for a SINGLE DISK, to reach lower-cost consumer markets.
    When you do it for arrays, it's called ASAP (Auto-tuning SSD Accelerated Pools of storage). These have been around for over a year now in rack-mount form, and Adaptec started with single-card solutions this fall.
    The problem with Adaptec's method is that they charge something like $1000 for a 32GB X25-E for the read-cache, without doing anything other than adding a tag to the firmware to show it's their proprietary design. If they would allow using any MLC SSD, it would be a good value proposition for consumers too; for the time being, though, it's only feasible for high-end workstations and low-end servers.
    I would consider using a 32GB Barefoot drive as read-cache for a RAID 5/6 on my Adaptec 5805 if it were possible, but I won't even consider buying an X25-E at 2-3x market price.

    If you can implement this type of ASAP in a single drive, it would act as a self-tuning read-cache, and building it with a single NAND chip, working internally as I have described with read-access logging and filtering, could boost average small-block random IOPS by 5-10x for general usage.

    EDIT: The term ASAP was coined by storagesearch.com. I just remembered an article they had this fall with a 5-year prediction for the storage market. I thought you would find it interesting, but didn't know where to post it, so I'll just put it here: http://www.storagesearch.com/5year-2009.html
    Last edited by GullLars; 04-08-2010 at 03:06 AM.

  15. #15
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    Interesting article, good reading!
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  16. #16
    Banned
    Join Date
    May 2009
    Posts
    676
    Quote Originally Posted by GullLars View Post
    That is exactly what I meant, Computurd, but my point was doing it for a SINGLE DISK, to reach lower-cost consumer markets.
    When you do it for arrays, it's called ASAP (Auto-tuning SSD Accelerated Pools of storage). These have been around for over a year now in rack-mount form, and Adaptec started with single-card solutions this fall.
    The problem with Adaptec's method is that they charge something like $1000 for a 32GB X25-E for the read-cache, without doing anything other than adding a tag to the firmware to show it's their proprietary design. If they would allow using any MLC SSD, it would be a good value proposition for consumers too; for the time being, though, it's only feasible for high-end workstations and low-end servers.
    I would consider using a 32GB Barefoot drive as read-cache for a RAID 5/6 on my Adaptec 5805 if it were possible, but I won't even consider buying an X25-E at 2-3x market price.

    If you can implement this type of ASAP in a single drive, it would act as a self-tuning read-cache, and building it with a single NAND chip, working internally as I have described with read-access logging and filtering, could boost average small-block random IOPS by 5-10x for general usage.

    EDIT: The term ASAP was coined by storagesearch.com. I just remembered an article they had this fall with a 5-year prediction for the storage market. I thought you would find it interesting, but didn't know where to post it, so I'll just put it here: http://www.storagesearch.com/5year-2009.html
    GullLars -
    Can you say why you're so occupied with this speed issue?
    Why do you need SUCH speedy drives, and what are you going to use them for that demands such speeds?
    I find it hard to understand what people are doing with a 1900MB/s sequential disk array like Tiltervos's, or mbreslin using something like this as well.
    Steve is using his array for benching, which is understandable, yet why any "regular", non-enterprise user needs SUCH throughput is something I find quite hard to understand.

    P.S. - The HRD is probably tested on 4KB blocks;
    drives are usually benched at this scale, random;
    it's a kind of convention - a standard.

  17. #17
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    I know; the consensus is that IOPS without any other qualifier means 4KB random. However, there are some cases where people deviate from this without specifying, and when I see numbers that seem a bit extreme, I always want to make sure.

    The speed I'm talking about in this thought experiment is simply accelerated IOPS from average hard drives, and possibly also a bit of a bandwidth bump.
    It would be like making a hard disk perform like a low-end SSD on cache hits, which would be a high percentage with a 4-8GB cache for average usage. And with only a $10-20 price increase.

    Another point I'm making is completely unrelated to performance. Using such a cache would significantly reduce head-thrashing from random reads, and thereby reduce power consumption, heat generation, noise levels, and wear on the drive. You could also get the same real-life performance from a slower-spinning drive with a slower-reacting read/write head, requiring less power and lowering production costs.
    Using a somewhat more sophisticated caching function (with a custom driver), you could let the drive spin down completely more often, or possibly have a lower spin state (e.g. 1000RPM) and keep the file structure and metadata in the cache, so the drive would only need to spin up to access un-cached files (or large files, in the case of a 1000RPM slow-spin state).
    Either type of caching could be justified price-wise simply from the power savings and extended life. The added performance wouldn't hurt either :P
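    A rough sketch of that power-saving read path (again hypothetical, assuming the file table and hot files are already mirrored in flash):

        # Hypothetical "green" drive read path: the platters stay parked
        # unless a request misses the flash cache.
        def spin_up_platters():
            pass                            # placeholder for starting the spindle

        def service_read(region, cached, platters_spinning):
            """Serve from flash on a hit; spin up only on a miss."""
            if region in cached:
                return "flash", platters_spinning    # hit: stay parked
            if not platters_spinning:
                spin_up_platters()                   # miss while parked
                platters_spinning = True
            return "platter", platters_spinning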


    With regards to 1000MB/s+ bandwidth, most people won't need it. Those who need it will be power users on their workstations: e.g. VMware, databases, working on large volumes of files, servers, etc.
    Most people would hit the point of diminishing returns around 500MB/s bandwidth (both read and write) and 50K IOPS. This can be had with 2-3 SSDs in RAID-0 from an ICH10R.

  18. #18
    Banned
    Join Date
    May 2009
    Posts
    676
    Well, it sounds reasonable; I'm just wondering whether manufacturers haven't already thought about this solution, as you can seemingly increase an HDD's performance with a caching function.
    RAID cards have been using it, and Intel has suggested it with its Braidwood technology, so it only raises the question of why HDD manufacturers indeed haven't been using it yet, especially for the server market, where drive longevity and MTBF are much more important.

    E:
    I've been going over the thread, and it seems, as Mr. Anvil has shown, they had thought about it.
    It could be that high NAND prices sabotaged this direction in the past, and that the manufacturers' desire for wider SSD market adoption has put this idea on hold.
    From that perspective, generally, adding flash chips to HDD architectures should lower flash prices as more of it would be produced, which would eventually enable SSD price drops and push SSDs more and more into mainstream usage.

    I'm not sure they won't be using it, though, as flash prices do come down, and there's a constant gap that needs to be bridged between the HDD market and the SSD one,
    both from a capacity aspect and in customers' awareness of it;
    it seems to be in the companies' interest to allow such an option.
    With global warming, and cutting down electricity bills and power usage,
    such a hybrid function sounds like the thing that might bridge these two ends and bring about an interim solution.

    Maybe we should talk with them?
    What do you say?

    E2:
    Have a look at this; it's basically the same idea:
    Seagate hybrid drive

    E3:
    You may want to read about some of its drawbacks here:
    wiki - hybrid drive drawbacks.
    Just noting it:
    I don't think manufacturers aren't thinking of these solutions, yet as we all probably know, technological implementation is never as simple and straightforward as it seems on paper.
    This is a large part of the research; this is where we learn of our misunderstandings and thinking mistakes.
    But maybe we should indeed forward these questions to an HDD manufacturer and see where the difficulties are.
    For us users it is relatively easy to bring up ideas, even when they cannot always be accepted;
    the main difference is we are usually never aware of all the deficits.
    This is really the breaking point where we would HAVE to discuss this with someone who is more aware of it.
    Last edited by onex; 04-16-2010 at 07:20 AM.

  19. #19
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    I see you say you have read (or skimmed) the thread, but you still bring up points already addressed.

    To address E2: it's basically the same as Samsung's hybrid drive. It uses a small (<1GB) flash cache and relies on the OS to make use of it. This is NOT the type of caching I'm talking about here.
    To explicitly underline it again: the solution I'm talking about is a rather large cache (0.5-1%+ of drive capacity, 4GB+), managed internally by the hard drive controller as a read-cache, without the OS ever knowing it's anything other than a normal HDD. This ensures it's completely compatible with any OS and computer supporting a standard SATA HDD.


    To address the counterpoints in E3:
    First off, what's addressed in this thread is an HDD with a flash read-cache, not a hybrid drive. You could call it that, but it would identify itself as a normal HDD and act as one, just with ~5K IOPS on cache hits.

    To take it point by point:
    Lower performance (access time) for non-cached data
    If the data being accessed is not in the cache and the drive has spun down, access time will be greatly increased since the platters will need to spin-up.
    Sure: cache miss = HDD speed, cache hit = low-end SSD speed. The hit rate for files <1MB would likely be ~80-90%, and for files <128KB closer to 95-99%. Overall speed would be faster than a 15K SAS drive for average usage, and really "snappy" if the usage pattern has a lot of hot files.
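    To put rough numbers on that (assumed latencies: ~12 ms per random read for a 7200 RPM HDD, ~0.2 ms for a flash hit, ~5.5 ms for a 15K SAS drive):

        # Back-of-the-envelope effective access time at various hit rates.
        HDD_MS, FLASH_MS = 12.0, 0.2        # assumed average latencies

        for hit_rate in (0.80, 0.90, 0.95):
            eff = hit_rate * FLASH_MS + (1 - hit_rate) * HDD_MS
            print(f"hit rate {hit_rate:.0%}: {eff:.2f} ms average access")
        # ~2.56 ms at 80%, ~1.38 ms at 90%, ~0.79 ms at 95% --
        # all well below the ~5.5 ms of a 15K SAS drive.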

    Lower performance for small disk writes
    Flash memory is significantly slower for writing small data, an effect that is amplified by use of journaling file systems.[9]
    This could be true for a hybrid drive, but not for this read-cache-only use. Random writes would be equal to an HDD's.

    Increased cost
    Hybrid hard drives are currently more expensive than their non-hybrid counterparts, because of the higher cost of flash memory.[9]
    At roughly $2.5-3 per GB for fast flash, an 8GB cache would cost about $20-25, which would easily be justified for high-performance drives by the added IOPS, and for large-capacity (1TB+) "green" drives by power savings combined with fast hot-file access.

    Reduced lifetime
    A hard drive, once spinning, suffers almost no wear. A significant proportion of wear arises during the spin-up and spin-down processes.[8] Indeed, the number of spin-ups is often given as the indication of the lifetime of a hard drive.
    Flash memory allows far fewer write cycles than a hard disk
    Spin-up and spin-down COULD be an issue if it's very aggressive. For high-performance drives with the kind of cache in question here, spin-down would not be an issue, as it wouldn't happen. For "green" drives it could be an issue, but no more so than for normal "green" drives if the spin-down parameters are the same.
    As for flash memory and write cycles, a read-cache would not likely be completely overwritten 5,000-10,000 times in a drive's life cycle.
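    A quick sanity check on that, assuming a typical MLC rating of 5,000 P/E cycles and an 8GB cache:

        # Endurance estimate: how long the cache lasts at a given daily churn.
        CACHE_GB, PE_CYCLES = 8, 5000            # assumed cache size, MLC rating
        total_writes_gb = CACHE_GB * PE_CYCLES   # 40,000 GB of total churn

        for gb_per_day in (1, 4, 8):             # daily cache-turnover scenarios
            years = total_writes_gb / gb_per_day / 365
            print(f"{gb_per_day} GB/day of cache writes -> ~{years:.0f} years")
        # ~110 years at 1 GB/day; even rewriting the entire 8GB cache every
        # day still gives ~14 years, so write cycles are a non-issue.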

    Increased perceived noise production
    A hybrid hard drive, spinning up and down, may make drive noise more noticeable, especially with varying usage conditions (i.e., fans and hard drive spinning up on usage).
    As I said for the point above, high-performance drives wouldn't spin down, but would get noise reduction from reduced head-thrashing. Green drives would not spin up and down more than normal green drives, but could be made more silent by using a slower head servo while still getting the same average (or even higher) IOPS performance.

    Increased power usage with spin-ups
    A hybrid drive requires spin-up and spin-down more often than a normal hard drive, which is often spinning constantly. Hard drives also draw more power during spin-up.[8][11]
    Same as the two points above: not applicable for high-performance drives, and no more of an issue for green versions than for normal green versions. Plus, the peak power draw of a single flash chip is below 500mW (half a watt), and the average power draw while reading is in the area of 100mW.

    Lower recoverability
    Hybrid hard drives, based on storage to both a flash component and a hard drive component, and bound to use by a specific OS (i.e., Windows Vista), cannot be recovered after failure using classic hard drive recovery services.
    An issue for hybrid drives that require OS support; the solution in question here is done in-drive and is OS-independent. Since the flash is a read-only cache, there is no more danger of data loss than with a classic hard drive; if anything, there's a better chance of recovering data in the case of a head crash, since hot files would be in the flash and would survive.


    I'm up for taking this up with manufacturers; it's just a question of whom to contact, and how. If anyone on the forum knows someone working for an HDD manufacturer, and could get them in here to brainstorm a bit and respond to the issues we raise, while not advertising their brand or engaging in other forum-illegal activities, that would be nice.


    EDIT: I'm sure manufacturers, or their storage lab rats, think of hybrid drives from time to time, or of the possibility of flash as a cache, but don't put it past "crazy enthusiasts" to come up with unusual but interesting stuff they haven't thought about.
    Last edited by GullLars; 04-16-2010 at 10:45 AM.

  20. #20
    Banned
    Join Date
    May 2009
    Posts
    676
    OK,
    I'm struggling a bit with this; I still don't have a full grip on the technology.
    I skimmed over the thread again, trying to see where I brought up already-addressed points, and couldn't find any.
    I was indeed just trying to strengthen some points and 'calm' others.
    I was trying to counter Anvil's skeptical take on whether they'll be doing it or not, and maybe I should have quoted it to direct you, but I thought it could generally be understood.
    I think it is an option, and that's why I'm saying we should ask a manufacturer whether this is viable.
    Yet I always come back to this wall, which says: we can't really know... unless we talk to someone, unless we ask them.
    We'll keep going in circles as long as we don't speak to one, so this seems a bit pointless, if you understand the point.
    I tend to think manufacturers are quite aware of constraints that are usually unseen by customers.
    That's about it; yet it is interesting and surely worth taking the shot.
    Last edited by onex; 04-16-2010 at 03:13 PM.

  21. #21
    Banned
    Join Date
    May 2009
    Posts
    676
    The only thing I'm missing here is the spin-down issue & power consumption:
    if there are many hot files in the workload, there doesn't seem to be any reason for the drive to be spinning;
    constant spinning means more power draw.

    A drive like that should be much less prone to failure. If you're referring to a non-spinning-down situation, then you seem to be describing a drive that relies on cache hits, where the heads are used only when data isn't already in the cache.
    Another point to notice is the fact that systems which need high-capacity drives usually need to use the entire space of the drive, which means a great deal of random seeks where the cache counts for less effectiveness.
    For a small drive - let's say 128GB - a 4-8GB read-cache sounds more reasonable from that perspective.
    Also, it depends on the work habits of that server or setup; a file server which constantly needs files from different sectors may not benefit from any of this,
    and actually, what we're talking about here IS a high-capacity drive dedicated to such setups.

    Or else you're talking about hot files as system files which are accessed repeatedly, in which case what we're talking about is a sort of OS-enhancing disk.

  22. #22
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    The idea was that the simplest implementation is read-caching of hot files, i.e. the most-read LBAs on the drive, plus possibly an algorithm/filter that prioritizes LBAs read alone or in only small clusters at a time (say 4-64KB). This would benefit any workload that has hot files, or hot parts of files, since it would simply cache the most-requested data blocks of <64KB. A drive used for serving purely video content would likely not get a noticeable performance boost, but a drive serving, say, both video and other mixed files which are read more frequently would benefit for those. If the usage is as an OS or app disk, a hot-file (hot LBAs, single to 16 adjacent) caching system of 8GB capacity would likely give a good average hit rate for random small-block reads, at least after some time of use.

  23. #23
    Banned
    Join Date
    May 2009
    Posts
    676
    Do you have any idea of workload usage patterns with regard to file size and/or preferred block size for RAID arrays in different setups?

    E:
    Found something on AT.
    Light workload -
    The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are absolutely sequential in nature. Average queue depth is 6.09 IOs.
    Heavy workload -
    Consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.
    Gaming workload -
    The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.
    I was saying to member kevikev here not long ago that the usual QD won't normally pass 10, so this seems right.

    Now we might be able to get an idea of typical server usage; sadly AT isn't doing any server benchmarks with a pre-defined workload for every setup as it does with mainstream patterns.
    We'll need to find out the percentage of hot-file usage in different dedicated machines in order to estimate such a "hybrid" feature's potential.

    I asked Anand a few weeks back about the software he uses for his benches, but he says it's proprietary and NDA'd.
    We came up with Xperf from the WPT (Windows Performance Toolkit) as a testing tool that might bring out some profound information on disk usage; it's the only one I found, yet I haven't had the chance to test it.
    Darn, I wish I could grab the one he uses: specialized software especially designed for drive performance analysis.
    So much crap on the market, and so many unprofessional tools;
    it seems they don't give independent research any chance.
    Last edited by onex; 04-17-2010 at 02:02 PM.

  24. #24
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    It's a read-cache I'm discussing in this thread, not a hybrid drive; that would be an entirely different thing.

    As I've stated a few times, the easiest way to implement it, and the way I would expect HDD manufacturers to do it, is just adding a single NAND chip and caching the most-read LBAs (both over time and recently) there. It can be done entirely in the drive without any difference showing to the OS. Many usage patterns would benefit from such a read-cache, especially use for OS, programs, or games.

  25. #25
    Banned
    Join Date
    May 2009
    Posts
    676
    Yeah, I mean hybrid as an HDD & SSD/flash conjunction;
    it's pretty much a hybrid,
    and AFAIK, hybrid usually comes as a synonym for a joining of two types of technology, or might even imply only a power-saving feature, as it has a "green" connotation.
