
Thread: FancyCache - software RAM based disk cache

  1. #26
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Ao1
    I just got a blue screen - system file integrity check repair failed - error code 0x490. I had to uninstall FancyCache in safe mode before I could boot normally.
    I had an unexpected shutdown on my laptop; it did restart without any issues though.
    This is a beta, one never knows.

    Quote Originally Posted by lowfat
    4k. 100% write. 100% random. QD1. Results after 5 minutes.
    Interesting results; I'll give it a go with some Iometer tests tomorrow.

    This is a more sobering result from one of my bench PCs (Ci7-920 @ 4) with one of the low-spec SSDs, the WD N1x:

    [AS SSD benchmark screenshot: as-ssd-bench WDC SSC-D0064SC - 3.10.2011 11-49-31 PM.png]

  2. #27
    I am Xtreme
    Join Date
    Oct 2005
    Location
    Grande Prairie, AB, CAN
    Posts
    6,140
    This is on a 1TB 7200.12

  3. #28
    ...
    Join Date
    Nov 2001
    Location
    Wichita, KS
    Posts
    4,598
    Quote Originally Posted by Anvil
    It might be of great use, especially if your storage is slow.
    It might even save some battery on your laptop.
    might be? is there a way to find out for sure?
    i'd be tempted to try it out, but i can max out my memory pretty easily so by my understanding it won't help in the slightest, might actually hamper performance.

  4. #29
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The only way of finding out is by testing; I don't believe there is a single answer to that question.

    It would depend on a lot of factors like the typical file size, the amount of memory for the cache, read or read+write caching, ...
    If you've got a lot of small files, a small cache might improve performance a lot.

    In the end you may find that your memory is better put to use by the applications than by the cache, but you won't know unless you do a test.
    I'd start off with a small cache, e.g. 256-512MB.

    And do keep in mind that this is a beta: I've had one unexpected restart and so did Ao1. It is not recommended for anything but testing.

  5. #30
    Xtreme Member
    Join Date
    Jun 2005
    Location
    United Kingdom of Great Britain and Northern Ireland
    Posts
    464
    It simply must be of some use; how much use depends on what you are doing. If, for example, you're playing a game, then you're only going to be accessing a subset of the data on your drives, and the cache must help with that.

    However, Windows already does at least some of this by default. Windows' default behaviour is to try to use as much memory as possible; by doing this it hopes to minimise disk access. Those of us who have noticed Windows using up lots of memory... well, yes and no. It's quick to release the memory when it needs to, but by design it'll fill it all up if it can.

    What's unclear is what's better... FancyCache or Windows. Previously, people running RAMdisks with the swapfile in the RAMdisk found better performance - at least on XP - so Windows doesn't get it right all the time. What's needed, I guess, would be some kind of benchmarking built into a game which would measure level load times in MB/sec, access times, etc. Perhaps with that and a bit of time it could be judged whether it was better to take 2GB away from Windows and give it to FancyCache or not.

    Edit: Of course I focused on read caching... for write caching it's much simpler, in that it has to help speed things up. It's always going to be faster to write 500MB of scattered data to memory and have it spill onto disk over time. However, again you have to bear in mind that you're taking the memory away from Windows, and if most of what you do is reads and very few writes... then it may not be an overall benefit.
    Last edited by Halk; 03-10-2011 at 04:21 PM.

  6. #31
    ...
    Join Date
    Nov 2001
    Location
    Wichita, KS
    Posts
    4,598
    thank you very much for the detailed explanation, it is appreciated. i might give it a bit of a shot in my system soon. i've a pretty "dirty" OS right now and if it could cause a bsod i'd rather wait until i next re-install.

    thanks again!

  7. #32
    Xtreme Member
    Join Date
    Jun 2005
    Location
    United Kingdom of Great Britain and Northern Ireland
    Posts
    464
    Intel's new Sandy Bridge chipset - the "Z" - has support for SSD caching. SSD caching isn't what FancyCache is doing for us, though.

    SSD caching is caching mechanical hard drives with SSD.
    We're caching SSD with RAM.

    It's the same basic principle though. Tom's Hardware has an article on it today, and I've linked directly to the page with benchmarks of how things are boosted by the SSD cache in front of the mechanical drive. My summary of the results: no improvement on boot-ups, app loading, etc.; significant improvements on repeated tasks.

    The very same principles should carry across to FancyCache. However, this is where the big problem comes in... You can see from Chris's article that some tasks are improved and some are not... but which tasks are improved? The ones where we're reading something on disk which we had already read before... which is exactly where having lots of memory to chuck at Windows would also help. So we may well be taking 2GB of memory away which Windows would do the same thing with, just to have FancyCache do it instead.

    Also worth noting is that, as far as I can see, the "level 2" cache in FancyCache isn't greatly different to what the Z series is offering... caching slow mechanicals with fast SSD. However, the Z series seems to allow it on a boot volume (apparently to no actual benefit) rather than having to wait until Windows loads to have it running.

    Overall... I'm less impressed with FancyCache than I was before.

    Edit: I said "extra memory to chuck at Windows would also help" - change that to "to chuck at Windows should also help", because I don't know how good Windows is at it (it's been doing it since at least XP, so it should be at least decent...)
    Last edited by Halk; 03-10-2011 at 05:09 PM.

  8. #33
    Registered User
    Join Date
    Oct 2006
    Location
    Kirghudu, Cowjackingstan
    Posts
    462
    I wonder what the real benefit of storage cache is, then? Does that mean the 1880's cache is as useless as a RAM-based one for desktop use?

    Sony KDL40 // ASRock P67 Extreme4 1.40 // Core i5 2500K //
    G.Skill Ripjaws 1600 4x2Gb // HD6950 2GB // Intel Gigabit CT PCIe //
    M-Audio Delta 2496 // Crucial-M4 128Gb // Hitachi 2TB // TRUE-120 //
    Antec Quattro 850W // Antec 1200 // Win7 64 bit

  9. #34
    Xtreme Member
    Join Date
    Jun 2005
    Location
    United Kingdom of Great Britain and Northern Ireland
    Posts
    464
    It's got to depend on what you're doing. If you're repeatedly running Vantage benchmarks then it's great!

    Seriously though... the first thing anyone should do is have too much memory. No other way to increase performance even comes close to making sure you have lots of spare memory.

    If we take it for granted that any of us enthusiasts, or those of us that need high performance for production, etc., already have a good system with plenty of memory... then is 512MB of controller cache really going to help? What's that controller going to be caching anyway? What we've already got in memory...?

    There are just too many variables and too many different concepts to fully understand. Does Windows 7 do a really, really good job? If it doesn't, then the door is open to caching. I don't think we've even got a way to reliably determine whether cache has a beneficial effect...

    Not unless someone can design a benchmark that boots up, does a day's work, and then shuts down... then does exactly the same with cache. And then does exactly the same with cache while reducing main memory to compensate for the cache... Anything else would be a synthetic test with obvious holes.
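    For what it's worth, short of that day-long benchmark, here's a crude way to at least get a rough signal - just an illustrative timing sketch, not a proper methodology, and the file path is only a placeholder: time the same set of random reads twice, once with the cache enabled and once without, and compare the runs.

    import os, random, time

    # Illustrative sketch only: time the same random 4KB reads twice.
    # The first pass is mostly cold; the second mostly hits whatever
    # cache (Windows or FancyCache) sits in front of the disk.
    def read_pass(path, block=4096, count=2000, seed=42):
        rng = random.Random(seed)              # identical offsets every pass
        size = os.path.getsize(path)
        start = time.perf_counter()
        with open(path, "rb") as f:
            for _ in range(count):
                f.seek(rng.randrange(0, max(size - block, 1)))
                f.read(block)
        return time.perf_counter() - start

    target = r"D:\testfile.bin"                # hypothetical test file
    print("cold:", read_pass(target), "warm:", read_pass(target))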

  10. #35
    Registered User
    Join Date
    Oct 2006
    Location
    Kirghudu, Cowjackingstan
    Posts
    462
    Good read by EMC's CTO on disk caching: http://chucksblog.emc.com/chucks_blo...ching-101.html
    Not necessarily workstation-relevant, but interesting nonetheless...

    The distinction is important -- volatile cache can safely be used for reads, but (generally speaking) shouldn't be used for writes. When an application writes data and gets an acknowledgment, it assumes that the data is safely stored and can be re-accessed when needed.
    Obviously enough, a given hunk of data sitting in read cache is a huge performance win -- the *second* time you access it. Not surprisingly, it does absolutely nothing for you the *first* time you read it.
    Part of the "heatedness" of the debate has to do with enterprise flash drives. Today, they do an excellent job at random read profiles -- there's no disk heads and virtually no latency. And, of course, they can be written to. Physical disk drives do poorly with random read profiles, large read caches only marginally better.

    The exception, of course, is if you've got the bucks to create a ginormous read cache, and pull almost all the significant data into memory. Don't snicker -- there are a few use cases where this sort of approach makes sense.
    However, in the real world, this pure sustained sequential write pattern tends to be the exception, rather than the rule. Most write patterns tend to be both bursty and relatively random. And large write caches help with both.

    Write bursts (think database updates or busy file systems) tend to be easily soaked up by write cache. The application is essentially writing to memory, rather than storage media, and you see an eye-opening performance increase as a result.

    In addition, random write patterns can be "coalesced" into more sequential patterns that can be written to disk in a more optimal fashion, greatly improving the performance of the back end.
    If my workload was primarily re-reading the same data over and over again with infrequent updates (and these do exist), I would strongly consider a design that had cheap SATA and the potential for large read caches.

    If I was concerned about random read performance (much more common), I'd be far more interested in something that supported enterprise flash for part of the workload.

    And if part of my workload involved bursty updates to data, I'd seriously consider an array with non-volatile write cache.

    Surprisingly, most customers have a mix of all three -- which is reflected in the way EMC builds its storage array products.
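    (The "coalescing" bit above is easier to see in code. This is just a toy sketch of the idea, not anything from the article or from FancyCache: buffer the random writes, then sort them by offset and merge adjacent ranges so the back end sees fewer, more sequential writes.)

    # Toy example of write coalescing: merge adjacent buffered writes
    # so they can be flushed as fewer, more sequential operations.
    def coalesce(writes):
        """writes: list of (offset, data) tuples, in any order."""
        merged = []
        for offset, data in sorted(writes):
            if merged and merged[-1][0] + len(merged[-1][1]) == offset:
                last_off, last_data = merged[-1]
                merged[-1] = (last_off, last_data + data)   # extend the previous range
            else:
                merged.append((offset, data))
        return merged

    # Four scattered 4KB writes collapse into two sequential ones:
    blocks = [(8192, b"x" * 4096), (0, b"x" * 4096),
              (4096, b"x" * 4096), (65536, b"x" * 4096)]
    print([off for off, _ in coalesce(blocks)])             # -> [0, 65536]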
    Last edited by F@32; 03-10-2011 at 08:30 PM.

    Sony KDL40 // ASRock P67 Extreme4 1.40 // Core i5 2500K //
    G.Skill Ripjaws 1600 4x2Gb // HD6950 2GB // Intel Gigabit CT PCIe //
    M-Audio Delta 2496 // Crucial-M4 128Gb // Hitachi 2TB // TRUE-120 //
    Antec Quattro 850W // Antec 1200 // Win7 64 bit

  11. #36
    Xtreme Member
    Join Date
    Jun 2005
    Location
    United Kingdom of Great Britain and Northern Ireland
    Posts
    464
    From that, reading between the lines, I get that cache isn't that good - which is where I think I've been headed in my mind anyway... I do, however, utterly reject the widespread notion that write caching is bad. It simply isn't, and I think I'll explore the idea a little - this is tentative thinking. I'm not drawing a line under this and saying "And thus I have spake and thus it shall be for ever more". It's a point for other people to pick at.

    Write caching is seen to be bad in the event of either a power failure or a system crash: it means data that should have been written has not been written. But is that really true? If there's a power cut and you're in the middle of writing data, you're in just as much trouble with a write cache as you were without one. The data is half written, which is probably worse than not written at all (corrupt). Or there's the bluescreen... how are cached writes going to help there? Windows just shat all over the disk; cached or not, there's still corruption.

    So when would cached really be worse than non-cached? Well, um. When non-cached would have started and finished writing while cached wouldn't. I can't think of a time when that would happen... Cached writes are, as far as I'm aware, simply buffered.

    Non-cached: you write 12345678 to the disk. The controller says "OK, hang on", writes 1... still hanging on... writes 2... still hanging on... writes 3, etc.
    Cached: you write 12345678 to the disk. The controller says "OK", writes 1, holds 2345678 in cache and says "You're good to go!", then proceeds to keep writing 2, 3, 4, 5, 6, 7, 8, etc.
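    In code terms, the cached case is something like this - a rough sketch only, nothing to do with how FancyCache is actually implemented: write() queues the data in RAM and returns immediately ("You're good to go!"), while a background thread trickles it out to the backing file, and flush() waits for the backlog to drain.

    import queue, threading

    # Toy write-back cache: the caller gets the go-ahead as soon as the
    # data is queued in RAM; a background thread does the real writes.
    class WriteBackCache:
        def __init__(self, path):
            self.backing = open(path, "r+b")   # assumes the file already exists
            self.pending = queue.Queue()
            threading.Thread(target=self._drain, daemon=True).start()

        def write(self, offset, data):
            self.pending.put((offset, data))   # acknowledged before it hits the disk

        def _drain(self):
            while True:
                offset, data = self.pending.get()
                self.backing.seek(offset)
                self.backing.write(data)       # the actual disk write, some time later
                self.pending.task_done()

        def flush(self):
            self.pending.join()                # wait until everything queued is written
            self.backing.flush()

    If the machine dies between write() returning and _drain() catching up, whatever was still queued is simply gone - which is exactly the scenario people argue about.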

    The end result is not that data is written to the drive later... is it? It should still be written in the same amount of time; the program you're using just gets the go-ahead signal earlier than it would have. Plus, of course, the data still exists in the cache after it's written, so there may be a benefit if you reference back to it while it's still cached.

    I don't understand how a bluescreen or power cut would be any different with cached over non-cached. The exception, of course, would be wear levelling etc., where writes aren't simply being buffered; they're being stored and only written at a later date if it's deemed worthwhile to do so - temp files never actually get written. SandForce works this way, but uses the NAND to cache rather than volatile RAM.

    One instance where I can see an issue is where, for example, you do a Windows update and a truckload of files are written, your cache speeds this up nicely, and Windows Update says "Reboot please" and the system reboots, voiding all that unwritten data in cache before it gets written. I would imagine FancyCache and other caches have to have safeguards for this... at a guess they send "not finished yet" signals back to Windows as it's shutting down, until they are finished.

    So caching writes, in my opinion, is not the safety-off, bleeding-edge dangerous thing to do that people seem to think - at least until someone points out how horribly wrong I am!

  13. #38
    Registered User
    Join Date
    Oct 2006
    Location
    Kirghudu, Cowjackingstan
    Posts
    462
    Quick look at stats after running 3x100MB CDM lol.



    It deferred 1GB out of 357GB in the case of CDM, and those writes were sequential to the SSD... Will run CoH and ETW games while logging FC and see what it does... Interesting.

    Someone on the OCZ forum tested out L1 and L2->SSD caching, rebooted, and it seemed to preserve the L2 data and then load it into L1. Is it more efficient than the Windows cache?
    Last edited by F@32; 03-10-2011 at 08:35 PM.

    Sony KDL40 // ASRock P67 Extreme4 1.40 // Core i5 2500K //
    G.Skill Ripjaws 1600 4x2Gb // HD6950 2GB // Intel Gigabit CT PCIe //
    M-Audio Delta 2496 // Crucial-M4 128Gb // Hitachi 2TB // TRUE-120 //
    Antec Quattro 850W // Antec 1200 // Win7 64 bit

  14. #39
    Xtreme Member
    Join Date
    Jun 2005
    Location
    United Kingdom of Great Britain and Northern Ireland
    Posts
    464
    That's pretty spanky if it really behaves that way. It might even be better than the way Intel's Z implementation works (we don't know a great deal about that yet). It's irrelevant for our purposes, though, since we're using SSDs already (unless you want to cache MLC Vertex 3s with a Vertex 3 EX :P ).

    Edit : We've both just hit 400 posts. Spooky!

  15. #40
    Memory Addict
    Join Date
    Aug 2002
    Location
    Brisbane, Australia
    Posts
    11,651
    Quote Originally Posted by josh1980
    Sorry, but that's not all that battery backups are for. All it takes is one blue screen or system freeze and everything in your write cache is lost. Battery backups protect you from a loss of data from power, but provide no protection from system issues.

    I've seen that article before about memory caching, but it's a bit different when you have complete fail-over protection for your websites to ensure 99.999% uptime and you don't cache your writes. What they use for websites isn't necessarily smart for desktop use.
    Yeah, the whole point is caching data which isn't critical if it's lost. Again, it's the end user that makes that decision.

  16. #41
    Registered User
    Join Date
    Mar 2010
    Posts
    60

    Volume Cache

    4R0 C300 256GB, LSI 9260 with FP. 12GB Dominator 2000.

    Mitch
    [Attached thumbnail: cache.jpg]

  17. #42
    Registered User
    Join Date
    Sep 2009
    Posts
    51
    An SSD is a non-volatile cache for an HDD. If you use a hardware solution like maxCache or CacheCade on your boot volume it absolutely boosts boot times. I don't see any reason to assume the Sandy Bridge SSD cache would be volatile.

  18. #43
    Xtreme Member
    Join Date
    Jun 2005
    Location
    United Kingdom of Great Britain and Northern Ireland
    Posts
    464
    Z68 cache should easily help things like boot times; I cannot think of any way that it could not. It is possible that it might be volatile - possible as in, hardware-wise it could issue a Secure Erase at power-down and wipe the whole drive - but entirely improbable. If they get the algorithms right then the 50GB cache of the 1TB drive will be 50GB of mainly apps and games and Windows, not media files and save games and documents.

    But pretty much nobody on the XS forums will be slumming it with an SSD as cache (they're limited to 60GB, I think). We'll have SSDs as boot volumes with all that stuff on them already.

    For us FancyCache means a far smaller cache, since it comes from main memory, and also every byte we give to FancyCache is a byte less that Windows gets to use to cache stuff with.

    My biggest worry for FancyCache is that if I give it 2GB to work with, then Windows is still going to have 3GB of cache. What are the chances of something residing in FancyCache being what I want to read and also not being in the Windows cache? That's what makes me now think FancyCache would be nice for writes but not so good for read caching.

    I'm open to different opinions though!

  19. #44
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    The original Windows cache is the culprit here; we don't want them both running simultaneously.
    For the most part the Windows cache is quite nice; having it configurable like FancyCache would have been great though.

  20. #45
    Xtreme Member
    Join Date
    Jun 2005
    Location
    United Kingdom of Great Britain and Northern Ireland
    Posts
    464
    Yes... but even if we could disable the Windows cache, then FancyCache would either have to dynamically adjust the amount of memory it had available for caching, or we'd have to allocate a set amount and leave enough for Windows to use - and when it wasn't being used, it'd be wasted memory. If FancyCache were to work that way then it'd be the same as a configurable Windows cache. That would be ideal.

    Nobody has yet commented on what I said above about write caching. I thought I was saying something rather controversial (but correct) and nobody has any comments about it. It has been the received wisdom that write caching is dangerous, and I've posted above to say I don't think it is. I'm keen to find out if I've got something right, or if instead I'm an idiot because I'm missing something obvious.

  21. #46
    Registered User
    Join Date
    Oct 2006
    Location
    Kirghudu, Cowjackingstan
    Posts
    462
    My perspective is that if it works and your system is stable, it's great and no more dangerous than write caching in Windows. We are utilizing things in our systems that make them faster but add a degree of risk, like overclocking RAM/CPU/GPU, RAID 0, RAID controllers with cache and no BBU, huge amounts of non-ECC RAM... For SSDs write caching is great. It eliminates a lot of redundant writes and does write combining for what actually gets written, making things easier for SSD controllers and the NAND itself.

    I'm not concerned about FC and WC (the Windows cache) working at the same time. On a 4GB-equipped system I barely ever see my memory usage go near 3GB... For testing out FC I will have 8GB of RAM; 3-4GB of that I will dedicate to FC with a 30-60 sec flush interval. IMHO the remaining 4-5GB for Windows itself is plenty.

    Peace...

    Sony KDL40 // ASRock P67 Extreme4 1.40 // Core i5 2500K //
    G.Skill Ripjaws 1600 4x2Gb // HD6950 2GB // Intel Gigabit CT PCIe //
    M-Audio Delta 2496 // Crucial-M4 128Gb // Hitachi 2TB // TRUE-120 //
    Antec Quattro 850W // Antec 1200 // Win7 64 bit

  22. #47
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I accidentally unplugged the SSD with the OS on it the other day. I was listening to an mp3 at the time. Like a headless chicken, the OS/mp3 kept running perfectly for ~10 seconds before blue-screening.

  23. #48
    Registered User
    Join Date
    Sep 2009
    Posts
    51
    A developer from the FC forum says "windows caching is file-level caching, ours is block-level caching. They are not same."

    I'm with Halk in that I won't be slumming it with HDDs on any of my workstations, but for a server with 256GB of RAM and terabytes of storage it's simply too costly to go pure SSD. I'm currently evaluating hybrid controllers from LSI and Adaptec, but it looks like FC may be a cheaper and more effective alternative. I will be doing some tests.

  24. #49
    I am Xtreme
    Join Date
    Oct 2005
    Location
    Grande Prairie, AB, CAN
    Posts
    6,140
    Quote Originally Posted by Ao1
    I accidentally unplugged the SSD with the OS on it the other day. I was listening to an mp3 at the time. Like a headless chicken, the OS/mp3 kept running perfectly for ~10 seconds before blue-screening.


    I haven't had any issues w/ my machine running FancyCache yet.

  25. #50
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    I'm considering putting 16GB of RAM in my file server and running SuperSpeed's caching program on it - but read-only. I've seen a lot of systems go down completely because of a block-level write cache. Enough that I'll never do it. You might save a few seconds here and there with a write cache, but losing several days reinstalling Windows and all of your games and stuff just isn't worth it to me.
