
Thread: FancyCache - software RAM based disk cache

  1. #51
    Xtreme Member
    Join Date
    Jun 2005
    Location
    United Kingdom of Great Britain and Northern Ireland
    Posts
    464
    Quote Originally Posted by gordy View Post
    A developer from the FC forum says "windows caching is file-level caching, ours is block-level caching. They are not same."

    I'm with Halk in that I won't be slumming it w/HDDs on any of my workstations but for a server with 256gb of RAM and terabytes of storage it's simply too costly to go pure SSD. I'm currently evaluating hybrid controllers from LSI and Adaptec but it looks like FC may be a cheaper and more effective alternative. I will be doing some tests.
    Even if they are not the same, I don't believe block level vs file level would make a significant difference, would it? At the end of the day both Windows and FancyCache would cache the same data, albeit in different ways.

    I feel your second usage pattern is where FancyCache would really shine. Windows doesn't seem to cache media files, at least for me; FancyCache would. It follows (or at least it's not much of a stretch) that FancyCache would also be good at caching databases, workfiles etc. where Windows wouldn't.

    All guesswork though!

  2. #52
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Did some low-effort PCMark testing on my laptop. Focused on overall score, Productivity score (usually my favorite), and the HDD score.

    Laptop specs:
    Lenovo T500
    2.4GHz C2D
    4GB DDR2 1066MHz (dual channel)
    ATi HD3650
    Seagate 160GB 5400RPM 2.5" drive
    Win7 x64

    Tested 0 cache, 256MB L1, 512MB L1, 1024MB L1, and 2048MB L1.

    Settings: read/write, 16KB blocks, LFU-R, 30s deferred write.

    Each test was run three times and the average was taken. Between each cache size I would stop caching and restart the whole system. There was no consistent trend with run order (other than the fact that the 2nd run at each cache size was almost always worse than the 1st run). I was expecting to include graphs showing how the cache gets better over time, but that just wasn't the case--maybe when it's turned on it already knows which blocks are best to cache? I'm not a storage guru, but I do know how to hit the "Run Benchmark" button.

    [PCMark score charts attached]
    Kind of lacking the zero-impact worst-case scenario I was hoping for; also it seems all the gains come from just a few tasks. Kind of blah for an end-user, unless they know they have a specific use for it, IMO.

    Larger definitely seems better though....and no, I didn't really play with the settings at all, just the L1 cache size.

    I can run more tests on Sunday if you guys want, just tell me what to do (as long as it's not psycho amounts of work) and I'll see if I can get it done

  3. #53
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    Quote Originally Posted by Vapor View Post
    Kind of blah for an end-user, unless they know they have a specific use for it, IMO.

    Larger definitely seems better though....and no, I didn't really play with the settings at all, just the L1 cache size.
    You just summed up everything there is to know about block level caching right there. It's not very useful for end users unless they have a specific use for it and larger is better.

    I find this thread fascinating because when I found Superspeed's SuperCache 4 or 5 years ago I figured out everything this thread has said. I also figured out that with write caching enabled every single machine I ran it on would slowly become unstable until eventually it would no longer function. Within 6 months I had abandoned the block level cache for write applications. Later I abandoned it altogether because I couldn't even prove it added any performance.

    Block level caching has been around since the 1980s. Anyone know why block caches aren't used in Windows? I'm surprised nobody has asked this question, because with the amazing improvements we "see" you'd think Microsoft would have jumped on this by now, right? Microsoft bailed on block level cache before Windows 95 came out. It was determined that block cache wasn't very useful for users and would only add value in a few specific situations (but in those few places it would add tremendous value). The risks, however, far outweighed gains that most users would never see.

    Block caching is useful for situations where you might request nearby data on the drive in quick succession. If you request block 20 on the hard drive, the block cache might ask the hard drive to send 19 through 22 and store them in the cache. This helps when data is not fragmented and the whole file sits close together. Remember all that advice about how defragmenting SSDs is bad? It means a block cache might not provide much benefit if your files aren't actually stored in one contiguous run. Additionally, since you're adding to the reads your disk is performing, you might actually take a performance hit on an SSD because your drive is much busier than it would otherwise be. Your request for data from an application might have to wait behind a bunch of other read operations that your block cache has requested.
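    To make that concrete, here is a minimal sketch of that kind of neighbourhood read-ahead: a toy direct-mapped cache written purely for illustration (the block size, slot count, read-ahead window and the device_read() helper are all invented; this is not FancyCache's actual design).

    Code:
    /* Toy block cache with neighbourhood read-ahead; illustration only. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE   4096   /* hypothetical block size */
    #define CACHE_SLOTS  256    /* tiny direct-mapped cache */
    #define READ_BEHIND  1      /* also fetch one block before... */
    #define READ_AHEAD   2      /* ...and two blocks after the request */

    static unsigned char cache_data[CACHE_SLOTS][BLOCK_SIZE];
    static long          cache_tag[CACHE_SLOTS];   /* which block a slot holds, -1 = empty */

    /* Stand-in for a real disk read; it just fabricates a pattern. */
    static void device_read(long block, unsigned char *buf)
    {
        memset(buf, (int)(block & 0xFF), BLOCK_SIZE);
    }

    /* Read one block through the cache, pulling in its neighbours on a miss. */
    static void cached_read(long block, unsigned char *out)
    {
        int slot = (int)(block % CACHE_SLOTS);

        if (cache_tag[slot] != block) {             /* miss */
            for (long b = block - READ_BEHIND; b <= block + READ_AHEAD; b++) {
                if (b < 0)
                    continue;
                int s = (int)(b % CACHE_SLOTS);
                device_read(b, cache_data[s]);
                cache_tag[s] = b;
            }
        }
        memcpy(out, cache_data[slot], BLOCK_SIZE);  /* hit, or the just-filled slot */
    }

    int main(void)
    {
        unsigned char buf[BLOCK_SIZE];
        for (int i = 0; i < CACHE_SLOTS; i++)
            cache_tag[i] = -1;

        cached_read(20, buf);   /* miss: blocks 19..22 are fetched */
        cached_read(21, buf);   /* hit: served from RAM, no disk access */
        printf("first byte of block 21: 0x%02x\n", buf[0]);
        return 0;
    }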

    Block caching is basically a very "dumb" cache. It doesn't know what the data is or whether it might be used again; it simply watches reads and writes to and from sectors. It has no idea what a "file" is, what a "partition" is, or even what a "file system" is. File caching is smart because it watches for when files are opened and closed by the OS and uses that information to try to predict the next read.
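    The "dumb vs smart" distinction is easy to see in what each kind of cache would key its entries on. A rough sketch with invented struct names (not real Windows or FancyCache structures):

    Code:
    /* Invented cache-key structs, for illustration only. */
    #include <stdio.h>

    /* A block cache knows nothing above the disk: its key is just a device
     * number and a block address. No files, partitions or file systems. */
    struct block_cache_key {
        int       device_id;   /* which physical disk */
        long long block_lba;   /* which block on that disk */
    };

    /* A file cache sits above the file system: its key names a file and an
     * offset within it, so it can also watch opens/closes and prefetch. */
    struct file_cache_key {
        const char *file_path;   /* e.g. "C:\\SomeFile.dat" (made-up path) */
        long long   offset;      /* byte offset within that file */
    };

    int main(void)
    {
        struct block_cache_key bk = { 0, 123456 };
        struct file_cache_key  fk = { "C:\\SomeFile.dat", 0 };
        printf("block key: disk %d, LBA %lld\n", bk.device_id, bk.block_lba);
        printf("file key:  %s @ offset %lld\n", fk.file_path, fk.offset);
        return 0;
    }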

    This block level caching bandwagon is just like "SSDs must be faster because their benchmark says so." Benchmarks don't tell the whole story and never will. You have to be smart enough to identify which characteristics are important for the intended application of the hardware, then buy accordingly.

    Even back with DOS 6.22 Microsoft bailed on write caching. When DOS 6.22 came out Microsoft disabled the write cache by default (it was enabled when 6.21 was released, then disabled for 6.22). Microsoft realized the error of their ways with write caching because too many people were complaining that 6.21 was corrupting their computers. It wasn't 6.21 that was to blame; it was end users not exiting their applications to a DOS prompt correctly before flipping the power switch. Microsoft knew they couldn't tell end users they were using their computers improperly (who would listen to Microsoft anyway, even if they were right?), so they chose instead to disable the write cache because users just weren't smart enough to handle it.

    Even now, Windows Server 2008 doesn't enable the write cache by default unless you have a battery backup attached to the computer. Personally, from experience with write cache problems, I don't enable the write cache on my file server at home because I want stability over performance.

    Block level caching will be popular around the forum for a few months. Those of us who do deep hardware and software optimizations (mostly people with Master's degrees and above) will know when a block level cache is valuable and will use it accordingly. People will eventually smarten up to the fact that block caching isn't all it's cracked up to be and that the hype was completely overrated. Don't feel bad, forum. I fell for the same trap 5 years ago. I learned a lot from the experience, just as you will. We can ride the boat together. I promise.

  4. #54
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    A few points on Windows vs. FC, file vs. block cache.
    Windows caches both reads and writes at file level and somewhat at block level (trust me, your storage would be abysmally slow if NTFS did not have some block level cache for the MFT).
    The good thing about file based write caching is that it can ONLY result in a loss of data for THAT file. Block level caching can result in the MFT not being written, and that can cause more issues.
    Also, the reason Windows uses file based caching is certain memory mapping features which would not otherwise work securely.

    I would presume FC will have an associated file system filter which would handle file level caching in case the volume is cached. A lot can be done at that level to make the most of available memory and not have double caches.
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  5. #55
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    Quote Originally Posted by alfaunits View Post
    A few points on Windows vs. FC, file vs. block cache.
    Windows caches both reads and writes at file level and somewhat at block level (trust me, your storage would be abysmally slow if NTFS did not have some block level cache for the MFT).
    The good thing about file based write caching is that it can ONLY result in a loss of data for THAT file. Block level caching can result in the MFT not being written, and that can cause more issues.
    Also, the reason Windows uses file based caching is certain memory mapping features which would not otherwise work securely.

    I would presume FC will have an associated file system filter which would handle file level caching in case the volume is cached. A lot can be done at that level to make the most of available memory and not have double caches.
    There is no block caching. The MFT is a file on your disk, and is cached just like any other file. Microsoft does give MFT caching a higher priority though. You can completely disable file caching. There is a registry entry for it, but I will warn you, it took me about 20 minutes to get back into regedit to enable it again later

  6. #56
    Registered User
    Join Date
    Sep 2009
    Posts
    51
    File cache is a great way to boost "desktop" performance when you're dealing with tons of small files. I've read in the performance tuning literature that Windows will even store heuristics on clusters of files that are usually opened together, so if you access one the others will be prefetched.
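    As a toy illustration of that "opened together" heuristic (this is not the actual Windows prefetcher; the file names, table and function names are all invented):

    Code:
    /* Toy "opened together" prefetch heuristic; illustration only. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_FILES 8

    /* companions[i][j] counts how often file j was opened right after file i. */
    static const char *files[MAX_FILES] = { "app.exe", "app.dll", "data.idx" };
    static int companions[MAX_FILES][MAX_FILES];
    static int last_opened = -1;

    static int file_index(const char *name)
    {
        for (int i = 0; i < MAX_FILES && files[i]; i++)
            if (strcmp(files[i], name) == 0)
                return i;
        return -1;
    }

    /* Record an open and suggest prefetching the file most often seen next. */
    static void on_open(const char *name)
    {
        int idx = file_index(name);
        if (idx < 0)
            return;

        if (last_opened >= 0)
            companions[last_opened][idx]++;      /* learn the pairing */

        int best = -1, best_count = 0;
        for (int j = 0; j < MAX_FILES && files[j]; j++) {
            if (companions[idx][j] > best_count) {
                best_count = companions[idx][j];
                best = j;
            }
        }
        if (best >= 0)
            printf("open(%s): prefetch candidate %s\n", name, files[best]);
        last_opened = idx;
    }

    int main(void)
    {
        /* After a couple of sessions, opening app.exe suggests app.dll. */
        on_open("app.exe"); on_open("app.dll"); on_open("data.idx");
        on_open("app.exe"); on_open("app.dll");
        on_open("app.exe");
        return 0;
    }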
    Block level cache improves performance in scenarios dealing with giant files - database files, virtual machine hd files, etc. I wouldn't say it's a bad thing although "dumb" may be technically accurate since it's implemented at a level where it's oblivious to files. I don't think file level cache is going to lend much help to mssql or vbox. I wouldn't say that stuff is boring for an end user since I'm an end user and I run mssql and vbox as well as many other applications that maintain very large index files that benefit from block level cache every day.
    I usually entrust block level caching of my data to my trusty Areca card with its 4GB RAM and BBU; however, I find the prospect of a software solution very interesting, since my CPU is much faster than the onboard ROC and there's much more RAM available. I'd say a sophisticated host-based block-level caching storage filter driver is very welcome here. Me thinks DOS 6.1 just gave Josh a bad rub

  7. #57
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I guess the success of cache-improved performance can be monitored by looking at the overall average percentage of read IOPs that were satisfied from the system cache, i.e. cached read IOP counts divided by total read IOP counts.
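    In other words, something like this (the counter names and figures are invented, just to show the arithmetic):

    Code:
    /* Sketch of the ratio described above; numbers are made up. */
    #include <stdio.h>

    int main(void)
    {
        long long cached_read_iops = 860;    /* reads satisfied from the cache */
        long long total_read_iops  = 1000;   /* all read requests issued */

        double hit_pct = 100.0 * (double)cached_read_iops / (double)total_read_iops;
        printf("cache hit rate: %.0f%%\n", hit_pct);   /* -> 86% */
        return 0;
    }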

    Here I monitored Black Ops multiplayer. I reloaded the same map multiple times and recorded overall performance and I/O operations specifically related to Black Ops. By monitoring only the Black Ops folder, any OS related background I/Os are ignored, even though they can of course impact the I/O performance of the specific processes being monitored. With that caveat, here is what I found:

    • SSD with FancyCache = 86%
    • SSD without FancyCache = 86%
    • HDD with FancyCache = 87%
    • HDD without FancyCache = 85%

    Both Windows and FancyCache were able to satisfy a high percentage of I/Os specifically related to the Black Ops folder from cache, regardless of the storage medium.

    What I could observe:

    • HDD 1st map load - HDD was significantly slower with or without FancyCache when compared to SSD. Subsequent map loads were much faster with or without FancyCache. Almost as fast as with SSD.
    • SSD 1st map load - No real difference between the 1st load and subsequent loads. FancyCache made no observable difference to game load or game play.

    This was a real disappointment and not what I was hoping to find.

    If I look at the 6 longest Physical Device Read Time Totals this is what I find.
    [Attachment: the one.png, 121.9 KB]
    Last edited by Ao1; 03-13-2011 at 04:39 AM.

  8. #58
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    I think I found a "bug" in FancyCache btw. If you use a setting that results in a blue screen on re-boot, you have to uninstall FancyCache in safe mode to be able to get back into Windows; in safe mode you can't change the settings. When you reinstall you have to reboot, and it then seems to default to the settings that caused the blue screen rather than the app defaults.

  9. #59
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Quote Originally Posted by Vapor View Post
    Tested 0 cache, 256MB L1, 512MB L1, 1024MB L1, and 2048MB L1.
    ...
    Larger definitely seems better though....and no, I didn't really play with the settings at all, just the L1 cache size.
    ...
    Great charts

    So, >1GB of cache is needed to make a difference, and there are still drops below the baseline; that could be hardware related.

    I wasn't expecting anything spectacular and even if you switched to using an SSD it would probably have resulted in smaller gains.

    I'll stick to my raid controllers and the default windows cache, I need the memory for my VMs. (Windows caching is dynamic, this is not)

    edit:

    @Ao1
    The bug is almost certainly an installer bug; there are some leftover registry entries.
    Last edited by Anvil; 03-13-2011 at 05:22 AM.
    -
    Hardware:

  10. #60
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Looks like it. I tried cleaning the registry, but it did not find/delete the offending keys. Once it happens, the only way I could find around it was a fresh install. Luckily I created an image just before I installed it.

  11. #61
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    Quote Originally Posted by gordy View Post
    Me thinks DOS 6.1 just gave Josh a bad rub
    Actually, I never used 6.21. I learned of the chronic problems people were having from all of my searching for why machines kept having corrupted system files and MFTs.
    Last edited by josh1980; 03-13-2011 at 02:08 PM.

  12. #62
    Xtreme Member
    Join Date
    Jun 2005
    Location
    United Kingdom of Great Britain and Northern Ireland
    Posts
    464
    Don't forget we were all playing with DriveSpace and related compression utilities... and fighting with config.sys to reduce the memory footprint to get Doom to run better

  13. #63
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    Quote Originally Posted by josh1980 View Post
    There is no block caching. The MFT is a file on your disk, and is cached just like any other file. Microsoft does give MFT caching a higher priority though. You can completely disable file caching. There is a registry entry for it, but I will warn you, it took me about 20 minutes to get back into regedit to enable it again later
    FATxx does not store the table as a file and it is still cached.
    NTFS caches the MFT differently from other files.

    On the topic of large files, it is always better for the application not to use the Windows cache but rather to cache the files internally to improve performance. The WC is a good thing in general, but for special purposes there are better optimizations.
    SQL Server for example internally manages the cache on the DBs.
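    For illustration, here is a minimal Win32 sketch of how an application can opt out of the Windows cache and do its own caching. The file name is made up and this is not SQL Server's actual code; database engines do use this kind of unbuffered I/O, though.

    Code:
    /* Sketch of opening a file with the Windows cache bypassed, so the
     * application can manage its own cache. Illustration only. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* FILE_FLAG_NO_BUFFERING bypasses the system cache;
           FILE_FLAG_WRITE_THROUGH makes writes go to stable storage. */
        HANDLE h = CreateFileA("D:\\data\\example.db",
                               GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING,
                               FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
                               NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        /* With no buffering, reads must be in multiples of the sector size
           and the buffer must be sector-aligned (VirtualAlloc is page-aligned). */
        void *buf = VirtualAlloc(NULL, 64 * 1024, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        DWORD got = 0;
        if (buf && ReadFile(h, buf, 64 * 1024, &got, NULL))
            printf("read %lu bytes straight from disk, not the Windows cache\n", got);

        if (buf)
            VirtualFree(buf, 0, MEM_RELEASE);
        CloseHandle(h);
        return 0;
    }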
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  14. #64
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    Quote Originally Posted by Halk View Post
    Don't forget we were all playing with DriveSpace and related compression utilities... and fighting with config.sys to reduce the memory footprint to get Doom to run better
    Those were the days. Everyone trying to squeeze every single byte of free memory they could out of the first 640K (conventional memory) and move everything they could into UMBs (upper memory blocks), extended and expanded memory.

  15. #65
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I've spent countless hours optimizing memory using QEMM, and don't forget SMARTDRV
    I've created a few TSRs as well; not sure if I miss those days, but it was fun while it lasted.
    -
    Hardware:

  16. #66
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    Quote Originally Posted by alfaunits View Post
    FATxx does not store the table as a file and it is still cached.
    NTFS caches the MFT differently from other files.

    On the topic of large files, it is always better for the application not to use the Windows cache but rather to cache the files internally to improve performance. The WC is a good thing in general, but for special purposes there are better optimizations.
    SQL Server for example internally manages the cache on the DBs.
    Yes, but it's still not block caching for the FAT system. The cache stores the entries in a table in memory for quick access.

  17. #67
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    It is organized as a block level cache, because that is much better for keeping memory fragmentation low.
    If you mean the cache is not done below the FS, you're right; the FS itself does the caching.
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  18. #68
    Registered User
    Join Date
    Sep 2009
    Posts
    51
    Quote Originally Posted by alfaunits View Post
    SQL Server for example internally manages the cache on the DBs.
    Yes, but ultimately SQL performs a checkpoint and unloads a barrage of I/O onto the subsystem. Whether SQL becomes bottlenecked at this point depends on whether your subsystem can handle it. In my experience Areca controllers with onboard RAM can buffer an entire checkpoint and keep SQL running smoothly while the drives catch up over time. SQL performance can also be bound by the write latency of your I/O subsystem as it performs write-ahead logging to maintain ACID compliance. This is where write caching can tremendously boost performance, but of course something like FC undermines what SQL is trying to do (not necessarily a bad thing, since there's no option to turn off WAL).
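    A rough sketch of why log write latency matters so much for a WAL-style engine: the commit can't be acknowledged until the log record is on stable storage, so every commit pays the write latency of the log device. Names and sizes below are invented; this is not SQL Server's actual code.

    Code:
    /* Sketch of a WAL-style commit: the log record must reach stable storage
     * before the commit is acknowledged. Illustration only. */
    #include <windows.h>
    #include <stdio.h>

    static HANDLE open_log(void)
    {
        /* WRITE_THROUGH asks the stack not to acknowledge from a volatile
           cache; a BBU-backed controller cache can still answer quickly. */
        return CreateFileA("D:\\data\\example.log",
                           GENERIC_WRITE, FILE_SHARE_READ, NULL,
                           OPEN_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
                           NULL);
    }

    static int commit(HANDLE log, const char *record, DWORD len)
    {
        DWORD written = 0;
        if (!WriteFile(log, record, len, &written, NULL))
            return 0;
        if (!FlushFileBuffers(log))      /* belt and braces: force to media */
            return 0;
        return 1;                        /* only now is the commit durable */
    }

    int main(void)
    {
        HANDLE log = open_log();
        if (log == INVALID_HANDLE_VALUE) {
            printf("could not open log: %lu\n", GetLastError());
            return 1;
        }
        const char rec[] = "COMMIT txn 42\n";
        printf(commit(log, rec, sizeof rec - 1) ? "commit acknowledged\n"
                                                : "commit failed\n");
        CloseHandle(log);
        return 0;
    }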

  19. #69
    Registered User
    Join Date
    Mar 2010
    Posts
    60
    I just discovered that attempting to run Intel's Toolbox optimizer while running the FC disk cache results in a flag stating the optimizer will not run in a RAID configuration.

    Mitch

  20. #70
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    Quote Originally Posted by alfaunits View Post
    It is organized as a block level cache, because that is much better for keeping memory fragmentation low.
    If you mean the cache is not done below the FS, you're right; the FS itself does the caching.
    No, it's not stored as raw block cache data. It's stored as organized FAT entries in RAM. But it doesn't matter. I know this because I got some inside info. There's no way to test or prove one of us right anyway. We'll just have to agree to disagree. It's also moot because very few people use FAT these days.

  21. #71
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    We can easily see by looking at the FAT code, which is freely available from MS
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  22. #72
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    I'm talking about the method of caching the data in RAM, not the format of the data on the disk. The FAT is basically a huge spreadsheet; it's similar to the MFT. I have no clue what you are talking about, to be honest. We might be arguing the same side of the story and not even know it. I'm definitely open to you explaining, though, because I'm really confused as to what you are trying to talk about

  23. #73
    Xtreme Addict
    Join Date
    Jun 2006
    Posts
    1,820
    FAT or any file system needs to allocate memory dynamically for anything it wants to cache. If the allocated blocks are all the same size, that produces the least memory fragmentation.
    The next best thing is allocations in multiples of the page size (4K * n).
    The worst case scenario would be to randomly "cache" FAT entries in memory as linked lists of varying sizes.

    I have seen a memory dump from a system where only some 300MB was allocated by drivers but 1.5GB of memory was used, due to fragmentation, leaving NO space for new allocations (this was confirmed by an MS tech guy).

    So, the best thing for a file system is to cache the table at a "block" level, i.e. in multiples of the page size.
    The Windows cache manager does this, as the cache is aligned to page-size boundaries.

    So, the best option for the file system is to cache an entire table block, or a larger structure, rather than allocating structures for each entry. Hence I mean block-based cache.
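    A small sketch of that allocation pattern: entries packed into fixed page-multiple blocks instead of one small allocation per entry. The names and sizes are illustrative only, not any file system's real structures.

    Code:
    /* Sketch of page-multiple cache allocation; illustration only. */
    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SIZE     4096
    #define BLOCK_PAGES   4                         /* each cache block = 16KB */
    #define BLOCK_BYTES   (PAGE_SIZE * BLOCK_PAGES)

    struct fat_entry { unsigned cluster, next; };   /* toy table entry */

    #define ENTRIES_PER_BLOCK (BLOCK_BYTES / sizeof(struct fat_entry))

    int main(void)
    {
        /* One allocation holds thousands of entries... */
        struct fat_entry *block = malloc(BLOCK_BYTES);
        if (!block)
            return 1;

        /* ...instead of thousands of small, variably-sized allocations,
           which is the fragmentation pattern described in the post. */
        for (size_t i = 0; i < ENTRIES_PER_BLOCK; i++) {
            block[i].cluster = (unsigned)i;
            block[i].next    = (unsigned)(i + 1);
        }

        printf("%zu entries cached in one %d-byte block\n",
               (size_t)ENTRIES_PER_BLOCK, BLOCK_BYTES);
        free(block);
        return 0;
    }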
    P5E64_Evo/QX9650, 4x X25-E SSD - gimme speed..
    Quote Originally Posted by MR_SmartAss View Post
    Lately there has been a lot of BS(Dave_Graham where are you?)

  24. #74
    Xtreme Addict
    Join Date
    May 2007
    Location
    Europe/Slovenia/Ljubljana
    Posts
    1,540
    Set the test size larger than the cache. If the test file is 100MB and you have a 512MB cache, speeds will go to 4-digit values. If it is the same size as, or larger than, the cache, you'll get the real speed...
    Intel Core i7 920 4 GHz | 18 GB DDR3 1600 MHz | ASUS Rampage II Gene | GIGABYTE HD7950 3GB WindForce 3X | WD Caviar Black 2TB | Creative Sound Blaster Z | Altec Lansing MX5021 | Corsair HX750 | Lian Li PC-V354
    Super silent cooling powered by (((Noiseblocker)))

  25. #75
    Xtreme Enthusiast
    Join Date
    Dec 2002
    Posts
    758
    FancyCache is definitely beta. When you set a cache for disk 0 it uses that amount of memory plus a little more, but for disk 1 it uses more than double, and for disk 2 more than triple. So it's not three drives = triple the memory usage, but three drives = 6x the memory usage.
