
Ideal stripe size for SSD RAID 0 running OS and apps?



Speederlander
07-31-2008, 06:26 PM
Dual SSD RAID 0 set-up running OS and apps. I assume the smallest stripe that you can get is optimal with SSD. Does anyone disagree?

tiro_uspsss
08-01-2008, 01:29 AM
sorry, bit OfT: are those 7 VRs attached to the Areca? If so, do they play nice together?

cheers! :toast:

Speederlander
08-01-2008, 08:47 AM
sorry, bit OfT: are those 7 VRs attached to the Areca? If so, do they play nice together?

cheers! :toast:

Yes, they are running off the Areca. When I get home from work tonight I'll give you a detailed discussion with some benches.

tiro_uspsss
08-01-2008, 08:52 AM
Yes, they are running off the Areca. When I get home from work tonight I'll give you a detailed discussion with some benches.

sweeeeet! look forward to it! :toast:

Buckeye
08-01-2008, 09:55 AM
64K worked best for me.

Anything higher made no real difference, and anything lower gave worse performance.
At least with my setup.

Sunayknits
08-20-2008, 06:15 PM
Dual SSD RAID 0 set-up running OS and apps. I assume the smallest stripe that you can get is optimal with SSD. Does anyone disagree?

I just installed 3x Samsung SSD RAID 0 w/onboard nVidia controller and I'm wondering the same thing.

IIRC, SSDs have more trouble with small writes. I would guess that a medium (64KB or 128KB) to large (256KB or 512KB) stripe size would be optimal, perhaps even larger.

The problem of course is that this whole system was designed to overcome mechanical hard drive limitations, and in our case all it does is increase available bandwidth. Basically we're just muxing a bunch of memory on a SATA controller instead of the system board.

I think very soon we will see a paradigm shift in the way that data is stored. The whole concept of a 'disk' now seems a bit more vague and unnecessary with available memory that's soon going to approach DRAM speeds ...

Just thinking out loud :p:

I was going to do some quick tests tonight but man that crap takes so much time ...

In the meantime here are a couple links for you to peruse:

StorageReview RAID Level 0 Overview (http://www.storagereview.com/guide/singleLevel0.html)

Stripe Width and Stripe Size (http://www.storagereview.com/guide/perfStripe.html)

Please let us know the results of your testing :up:

eva2000
08-20-2008, 06:19 PM
I used 128K myself http://i4memory.com/showthread.php?t=8944

Buckeye
08-20-2008, 06:39 PM
I used 128K myself http://i4memory.com/showthread.php?t=8944

Thanks for that link and the tests with the RAIDs, Eva! Very interesting, to say the least.

Not sure, but am I completely missing the picture here? Forget for a moment that my RAID is 8x SSDs, let's say it's 4x SSDs. I also used 64K stripes; larger was no benefit, smaller and performance was worse.

Look at this graph, there is no spiking, just nice smooth graphs.
http://img517.imageshack.us/img517/4936/hdtachareca32k8xmtronprfc4.jpg (http://imageshack.us)

Head over to DVNation and look at his benches.
http://www.dvnation.com/benchmarks.html

I just see huge problems with these new SSDs and these graphs I keep seeing.

Keep in mind that I would love to see these newer SSDs perform well, I would jump on them in a heartbeat. But to be honest I just don't see the performance in the new ones.

eva2000
08-20-2008, 06:47 PM
Yeah, well, your SSDs are SLC-based: the price per GB is much higher, and so is the performance compared to MLC SSDs.

Buckeye
08-20-2008, 06:50 PM
Yeah, well, your SSDs are SLC-based: the price per GB is much higher, and so is the performance compared to MLC SSDs.

That's a great point and I will not disagree with that.

It's just that, to me, the new MLC SSDs don't seem to stack up to the older SLC models performance-wise, and btw the price for those has come way down.

Nanometer
08-20-2008, 11:22 PM
Wow, almost 900 MB/s of read and 0.1 ms access time, I am so jealous... Though I bet your CPU is a huge bottleneck with hard drive performance like that, haha.

m^2
08-21-2008, 12:50 AM
Yes, small writes are an issue. Basically SSDs have to write whole erase blocks, not parts of them. An erase block is usually 2 MB, sometimes 8 MB, and I've even heard a suggestion that one drive might have 16 MB.
(I don't remember how many drives you have; for this post I assume 9 in RAID 5.)
When you have a 64K stripe and a 0.5 MB file, it gets striped across all the drives, and you write a total of 18 MB with the performance of a single drive. You get a performance increase when the file size exceeds 2 MB - all drives write just one block, instead of one drive writing 2. If I get it correctly (which is not that certain), write performance should be about the same all the way up to stripe size = erase block size. Actually it should get slightly better, because the controller has a simpler job. Life expectancy would be best in this case too.
You'll lose read performance, though.
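A rough sketch of that worst case in Python (nothing authoritative - the 2 MB erase block, 64K stripe, 0.5 MB file and 9-drive RAID 5 are just the assumptions from this post, and real controllers may coalesce writes in ways this ignores):

# Worst case described above: every drive touched by a small write has to
# rewrite a full erase block. All figures are the post's assumptions.
KB = 1024
MB = 1024 * KB

def worst_case_bytes_written(file_size, stripe_size, erase_block,
                             n_drives, parity_drives=1):
    """Total NAND bytes written if each touched drive rewrites whole erase blocks."""
    chunks = -(-file_size // stripe_size)        # ceiling division
    data_drives = n_drives - parity_drives
    rows = -(-chunks // data_drives)             # stripe rows touched
    drives_touched = min(chunks, data_drives) + parity_drives
    return drives_touched * rows * erase_block

written = worst_case_bytes_written(512 * KB, 64 * KB, 2 * MB, 9)
print(written / MB, "MB of NAND writes for a 0.5 MB file")  # -> 18.0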

Buckeye
08-21-2008, 04:18 AM
Wow, almost 900 MB/s of read and 0.1 ms access time, I am so jealous... Though I bet your CPU is a huge bottleneck with hard drive performance like that, haha.

Actually it's not a real problem for the CPU.

This is RAID 5 running here:
http://img396.imageshack.us/img396/2521/hdtuneqm3.jpg (http://imageshack.us)


Yes, small writes are an issue. Basically SSDs have to write whole erase blocks, not parts of them. An erase block is usually 2 MB, sometimes 8 MB, and I've even heard a suggestion that one drive might have 16 MB.
(I don't remember how many drives you have; for this post I assume 9 in RAID 5.)
When you have a 64K stripe and a 0.5 MB file, it gets striped across all the drives, and you write a total of 18 MB with the performance of a single drive. You get a performance increase when the file size exceeds 2 MB - all drives write just one block, instead of one drive writing 2. If I get it correctly (which is not that certain), write performance should be about the same all the way up to stripe size = erase block size. Actually it should get slightly better, because the controller has a simpler job. Life expectancy would be best in this case too.
You'll lose read performance, though.

Yes, that sounds very good, nice job. Depending on RAID size, the controller used in the SSD, the number of drives, etc., the stripe size may vary - like tuning your SSD RAID :)

m^2
08-21-2008, 04:43 AM
What I wanted to say is that, when it comes to writes, the optimal stripe size would not depend on the number of drives, RAID level, or usage patterns, but on the SSD construction only.

Sunayknits
08-21-2008, 05:33 PM
I did some testing last night to try and determine the optimal stripe size for my 3x Samsung 32GB (Model MCBQE32G5MPP-0VA00) SSD array.

Using an ASUS P5N32-E SLI Plus w/onboard nVRAID, I tried all the stripe sizes available. I re-imaged the array with a basic XP SP3 install each time, gave the system a few minutes to settle, and ran the tests.

I'm not going to attempt an in-depth analysis of these results because frankly there are just too many factors I don't understand.

Obviously these controllers are optimized for hard drives, and use specific methods to deal with things like rotational and access latency. For example, IMHO NCQ (Native Command Queuing) is absolutely useless for an SSD and could even be causing problems with this new paradigm. NCQ is meant to reduce the problems caused by slow access times on hard drives and doesn't make sense at all with SSDs.

I'm using an 8KB stripe size right now and it seems to be performing very well. I don't use this box for anything but gaming, so it will take some time before I can say how it performs with day-to-day tasks ... :rolleyes:

I chose 8KB because it has the highest avg. read, plus the highest CPU util (depending on which benchmark program you look at, lol). The higher CPU util leads me to believe more data is being fetched at a faster rate from the onboard controller. Plus it just felt faster as I was using it.

I have since installed HL: Episode 2, and the load time between levels, which was annoyingly long on my Raptors before, is now about a third of what it was. Whether this is worth $1200 I'm still not sure, but being on the bleeding edge is what Xtreme is all about, right? :rofl: :confused: :rolleyes:

Enough ranting, here are my results:
*Note that these are READ tests only; writes could be an entirely different story ...

http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_4Kb_StripeSize_HDTach.jpg
http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_4Kb_StripeSize_HDTune.jpg
4KB Stripe Size


http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_8Kb_StripeSize_HDTach.jpg
http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_8Kb_StripeSize_HDTune.jpg
8KB Stripe Size


http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_16Kb_StripeSize_HDTach.jpg
http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_16Kb_StripeSize_HDTune.jpg
16KB Stripe Size


http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_32Kb_StripeSize_HDTach.jpg
http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_32Kb_StripeSize_HDTune.jpg
32KB Stripe Size


http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_64Kb_StripeSize_HDTach.jpg
http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_64Kb_StripeSize_HDTune.jpg
64KB Stripe Size


http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_128Kb_StripeSize_HDTach.jpg
http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_128Kb_StripeSize_HDTune.jpg
128KB Stripe Size


http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_Optimal_StripeSize_HDTach.jpg
http://www.brimbo.com/temp/mozartbuild/3x_RAID0_Samsung_SSD_Optimal_StripeSize_HDTune.jpg
"Optimal" Stripe Size

m^2
08-21-2008, 10:44 PM
Spiky :shocked:
What happens at 16 GB???
Do you know how these benchmarks test drives? Because I wouldn't be surprised if the stripe size that's optimal for them isn't really that good in real-world apps.

Sunayknits
08-22-2008, 08:42 AM
Spiky :shocked:
What happens at 16 GB???
Do you know how these benchmarks test drives? Because I wouldn't be surprised if the stripe size that's optimal for them isn't really that good in real-world apps.

Yeah, part of the problem is using the onboard controller, which sux compared to an Areca, HighPoint or 3Ware ... Alas, I would have to replace my SB waterblock in order to use an add-in card (and I'm starting to think this may be worth the effort).

Also, I'm running the bench from the drives being benched, so I suspect that's where the big spike at 16GB comes from.

Another couple of unknowns:

a) These benches are meant for hard drives and may actually be poor indicators of real-world SSD performance, as you mentioned.

b) The stripe width (# of drives in the array) may affect performance significantly. I'm guessing a power of 2 is probably better, but I have no facts to back this up ... I'm tempted to buy another SSD, along with an add-in card, but damn this hobby is expensive! :ROTF:

m^2
08-22-2008, 09:05 AM
b) The stripe width (# of drives in the array) may affect performance significantly. I'm guessing a power of 2 is probably better, but I have no facts to back this up ... I'm tempted to buy another SSD, along with an add-in card, but damn this hobby is expensive! :ROTF:

I don't think so; this shouldn't make a significant difference unless you have a cluster size bigger than the stripe size. Maybe it fits some controller's structures better... but anyway, the bigger the better. :up:

CrimInalA
09-04-2009, 05:23 AM
Everyone seems to use different stripe sizes.

I am wondering which is the best possible combination of stripe size and cluster size in a RAID 0 SSD environment. (In my case, 2x 160GB Intel SSD G2.)
There must be a "best" value somewhere, no?

yngndrw
09-04-2009, 05:56 AM
I'm using 128K with two 80GB G2s on a Highpoint RR3520.

Boot-up time isn't as amazing as some users' on here, but I suspect that's down to the use of the RAID card over onboard controllers. (Which I'm using more for reliability.)

While it probably doesn't net the very best performance, I have to point out that it doesn't really matter that much. They are still lightning fast and I can still virus scan my C drive in 2 minutes 20 seconds. Also, the shut-down time is very impressive.

In short, I'm sure you'll be happy with whatever you go for.

CedricFP
09-04-2009, 06:38 AM
From what it seems, on a single SSD 32/64 look like the best stripe sizes, and in RAID, 64/128.

I'm using 64 on a single Vertex 60 and am getting subpar sequential read/write speeds (190/95 respectively), as opposed to the 230/130 figures advertised on the box or whatever. XP 64 SP2.

CrimInalA
09-04-2009, 06:41 AM
Yes, it seems indeed that most people recommend a 64K stripe size and leave the cluster size at the default, which is 4K if I'm not mistaken.

I will go with those sizes and do a fresh Win7 install this evening.

lowfat
09-04-2009, 08:03 AM
From what it seems, on a single SSD 32/64 look like the best stripe sizes, and in RAID, 64/128.

I'm using 64 on a single Vertex 60 and am getting subpar sequential read/write speeds (190/95 respectively), as opposed to the 230/130 figures advertised on the box or whatever. XP 64 SP2.

You don't use a stripe size w/ a single drive. Stripe size is only for RAID.

Yes, it seems indeed that most people recommend a 64K stripe size and leave the cluster size at the default, which is 4K if I'm not mistaken.

I will go with those sizes and do a fresh Win7 install this evening.

128K or larger is what you want. I would go into it but I am lazy :p: But the short version is that the erase block page is generally 64K. So every time a write is done, 64K needs to be written to the drive. Now multiply that by two since you have 2 drives and you have 128K.
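In arithmetic form, that rule of thumb looks like the snippet below - purely restating the claim above; the 64K "erase block page" figure is as stated in this thread, not a datasheet value.

# Rule of thumb from the post above: stripe size = per-drive erase page x number of drives.
KB = 1024
erase_page = 64 * KB      # figure claimed above, not a verified spec
drives = 2
suggested_stripe = erase_page * drives
print(suggested_stripe // KB, "KB suggested stripe")   # -> 128 KB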

CedricFP
09-04-2009, 03:47 PM
You don't use a stripe size w/ a single drive. Stripe size is only for RAID.



I meant alignment, but I thought ultimately they ended up being the same thing.
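Worth separating the two, though: stripe size is how the controller splits data across drives, while alignment is whether the partition's starting offset falls on a multiple of the stripe (and ideally the flash erase block). A minimal check - the offsets and sizes below are only example assumptions; on Windows the actual starting offset can be read with 'wmic partition get Name,StartingOffset'.

# Minimal alignment check: is the partition's starting offset a multiple of
# the stripe size and of the (assumed) flash erase block?
KB = 1024
MB = 1024 * KB

def is_aligned(offset_bytes, boundary):
    return offset_bytes % boundary == 0

for name, start_offset in (("XP-era default (63 sectors)", 63 * 512),
                           ("Vista/Win7 default (1 MB)", 1 * MB)):
    for label, boundary in (("64 KB stripe", 64 * KB),
                            ("512 KB erase block (assumed)", 512 * KB)):
        state = "aligned" if is_aligned(start_offset, boundary) else "MISALIGNED"
        print(f"{name} vs {label}: {state}")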

Computurd
09-04-2009, 07:11 PM
I would highly recommend as large as you can get... the bigger the better. I use 1MB.

Computurd
09-06-2009, 01:48 PM
OK, QOTD: what stripe size would you use for 8 SSD drives for the best random read access?

yngndrw
09-07-2009, 09:39 AM
I would highly recommend as large as you can get... the bigger the better. I use 1MB.
This is what I was considering, as the drives already have lightning-fast random access times and can only really benefit from sequential access improvements, but Gilhooley in the thread I made suggested that the current generation of controllers was designed for normal drives and around 128K stripes. I didn't feel like testing every combination, so I just decided to use that.

SteveRo
09-07-2009, 03:08 PM
OK, QOTD: what stripe size would you use for 8 SSD drives for the best random read access?

Good evening Computurd. The answer to your question depends greatly on the controller and the drives employed. 64 or 128 is probably best, but it's better to test the performance of your array at as many of the stripe size options as your controller provides. Please, please report back your results!

Biker
09-07-2009, 04:04 PM
128K is the sweet spot for me with 2 SSD drives for OS / apps.

Computurd
09-09-2009, 01:47 PM
After much testing in this area with an 8-drive array, I have come to find that the larger the better. The small-file transfer speed stays the same no matter what stripe you use; however, the large sequential transfers will suffer from smaller stripe sizes, at least on a 9260-8i.

Biker
09-09-2009, 04:15 PM
After much testing in this area with an 8-drive array, I have come to find that the larger the better. The small-file transfer speed stays the same no matter what stripe you use; however, the large sequential transfers will suffer from smaller stripe sizes, at least on a 9260-8i.

Agreed.

However, he is discussing a 2-drive OS configuration here, and in my experience sequential transfer speeds scale up with the number of drives and stripe size...

Eg.

8 drives = 1MB stripe (your setup)
4 drives = 512K
2 drives = 256K (non-OS)
2 drives = 64K or 128K (OS)

As ever testing on your own system with your own specific setup / usage patterns is the best way forward.

ic3m4n2005
09-14-2009, 07:23 AM
I have three 60GB Vertexes incoming. I will attach them to my Areca 1680ix and build a RAID 0 array out of them.
So I'm going to use 64K or 128K for OS, games, and everything?!

Biker
09-14-2009, 07:27 AM
Since you are mixing games and OS, I'd say 128K....

F@32
12-24-2009, 04:03 PM
Here are quick tests of 2x 30GB 1.3FW OCZ Vertex RAID-0 on ICH10R 1.20: 16K vs 128K stripe size.

http://i214.photobucket.com/albums/cc176/dosaaf/VertexOnGA/th_OCZ16-128K_HDTR.png (http://i214.photobucket.com/albums/cc176/dosaaf/VertexOnGA/OCZ16-128K_HDTR.png)

http://i214.photobucket.com/albums/cc176/dosaaf/VertexOnGA/th_OCZ16-128K_CDM.png (http://i214.photobucket.com/albums/cc176/dosaaf/VertexOnGA/OCZ16-128K_CDM.png)

Computurd
12-24-2009, 05:31 PM
Hey F@32, would you mind running it at a higher stripe if you can?

F@32
12-24-2009, 05:42 PM
Hey F@32, would you mind running it at a higher stripe if you can?

ICH10R is limited to 128K stripe size. I also just finished installing Win7 and all apps on it, sorry :(

Computurd
12-24-2009, 06:20 PM
Sorry, lol, I need to pay attention :)

SteveRo
12-28-2009, 03:11 AM
The optimum stripe size will vary by controller, probably also by the controller/drive combination used and by what RAID level you are using.
For example, turd is right that for the LSI 9260-8i, 1MB is best, and it seems to be best for both the Vertex and the ACARDs.
For the Areca 1231ML I think the best stripe size for the ACARDs in R0 is 64 - I will verify.

SteveRo
12-28-2009, 05:09 AM
For the 1231ML-2G / 8x ACARD R0 combo, 64 is still the best - confirmed using Iometer and the PCMark Vantage HDD test.
Smaller block sizes result in lower small-file transfers in Iometer; a larger block size (128) results in the HDD test score dropping from 53K to 48K.