System info: Supermicro X9SAE-V motherboard, Intel Xeon E3-1275V2 CPU, 32GB 1600MHz ECC memory, LSI 9271-8i at PCIe v3 x8 (with FastPath), 8 OCZ Vertex3 120GB SSDs, RAID0
Raw disk performance as measured by Gnome Disk Utility is very good, >= 4.0 GB/s sustained read/write
LSI config that seems to work best for raw disk IO: 256KB stripe, no read-ahead, always write-back, Direct IO
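(In case it's useful, I believe the equivalent MegaCli commands for those cache/IO policies are roughly the following; adjust the -L/-a indices for your own adapter and logical drive numbering:)

    MegaCli -LDSetProp WB -L0 -a0        # always write-back
    MegaCli -LDSetProp NORA -L0 -a0      # no read-ahead
    MegaCli -LDSetProp -Direct -L0 -a0   # Direct IO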
So far, I have tried a few different stripe sizes (64K, 128K, 256K) with the XFS file system, using what I believe are the optimal XFS stripe unit and width (example mkfs.xfs command below):
sunit = stripe size / 512 (I usually get a warning from mkfs.xfs that the specified stripe unit is not the same as the volume stripe unit 8)
swidth = sunit * #drives
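For the 256KB stripe on 8 drives, that works out to the mkfs.xfs command below (assuming the virtual drive shows up as /dev/sda; the byte-based su/sw form should be equivalent and is less error-prone):

    # 256KB stripe = 512 sectors of 512 bytes, 8 data drives
    mkfs.xfs -d sunit=512,swidth=4096 /dev/sda
    # equivalent, using byte-based units
    mkfs.xfs -d su=256k,sw=8 /dev/sda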
Linux blockdev read-ahead is the main system parameter I have varied; a value of 16384 seems to work well. For FastPath + SSDs, LSI recommends setting the controller read-ahead to no read-ahead. I have tried it both ways and it does not seem to make a difference. The Linux blockdev read-ahead, however, makes a big difference.
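For reference, I set the read-ahead with blockdev; the value is in 512-byte sectors, so 16384 works out to 8MB (device name assumed to be /dev/sda):

    blockdev --setra 16384 /dev/sda
    blockdev --getra /dev/sda    # verify the new setting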
I have mainly used iozone as my file system performance benchmark, since it has worked well for me in the past.
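Roughly the kind of iozone invocation I mean for the sequential numbers (the record size, per-thread file size, thread count, and mount point /mnt/raid are only illustrative):

    cd /mnt/raid
    # sequential write (-i 0) and read (-i 1), 1MB records,
    # 4GB per thread, 8 threads, include fsync in write timing (-e)
    iozone -i 0 -i 1 -r 1m -s 4g -t 8 -e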
I am looking for suggestions for optimizing the Linux file system performance, preferably from someone who has worked with a similar setup on Linux. Right now, my sequential read performance is decent, approaching 4 GB/s, but my write performance seems to have a ceiling of about 2.5 GB/s whether I use a RAID0 of 5 SSDs or 8 SSDs.
I have seen some people recommend ext4 instead of XFS. Also, I know there are a lot of Linux system vars that may be applicable. My goal is to get as close as possible to 4 GB/s sequential read/write. I don't have a lot of time right now to try every possible configuration.
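(As an example of the kind of system vars I mean, the VM dirty-writeback knobs; the values below are only illustrative, not something I have tested:)

    # allow more dirty data to accumulate before writers are throttled
    sysctl -w vm.dirty_background_ratio=10
    sysctl -w vm.dirty_ratio=40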


