Quote Originally Posted by audienceofone
^^
Virtualrain, you asked what people used and whether their choice was based on evaluation. I've tried to show you the evaluation I undertook to reach my conclusion. Do you think my evaluation methods were flawed? Why do you think my IOmeter results were worse with a 32k stripe? When I ran random reads & writes I only got 5.24 MB/s on 4k writes with a 16k stripe, as opposed to 199.38 MB/s with a 128k stripe.

If I'm missing a trick I'd like to know. If you think my evaluation methods are wrong, please explain why and what you think would be a better way of testing. I can reset my stripe size online, so I'd be happy to run whatever test you think would be relevant.
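As a rough cross-check independent of IOmeter, a minimal random 4k write timer could look like the sketch below. This is an assumption-laden illustration, not a replacement for a proper benchmark: it goes through the filesystem and page cache (no O_DIRECT), so absolute numbers won't match IOmeter's raw-disk results, but relative differences between stripe sizes should still show up.

```python
import os
import random
import tempfile
import time

BLOCK = 4 * 1024          # 4 KiB writes, matching the IOmeter test size
FILE_SIZE = 64 * 1024**2  # 64 MiB test region (small, to keep the run quick)
N_WRITES = 2048           # number of random-offset writes

def random_4k_write_mbps(path):
    """Time N_WRITES random 4 KiB writes to `path` and return MB/s."""
    buf = os.urandom(BLOCK)
    with open(path, "wb") as f:
        f.truncate(FILE_SIZE)                 # pre-size the file
        start = time.perf_counter()
        for _ in range(N_WRITES):
            # pick a random block-aligned offset inside the file
            off = random.randrange(0, FILE_SIZE // BLOCK) * BLOCK
            f.seek(off)
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())                  # force the data out to the array
        elapsed = time.perf_counter() - start
    return (N_WRITES * BLOCK) / (1024**2) / elapsed

if __name__ == "__main__":
    # write the test file on the array under test
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        print(f"{random_4k_write_mbps(path):.2f} MB/s")
    finally:
        os.remove(path)
```

Run it once per stripe-size configuration (with the test file on the array in question) and compare the printed figures; a single run is noisy, so averaging a few passes would be sensible.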
I really don't know what to think. This thread is a mixed bag of theory (some of which seems old-school and/or irrelevant), evaluation techniques (if any) and varying results (I personally found three different benchmarks that each favored a different stripe size).

Your results seem incredibly bizarre. Does it make sense to you that stripe size alone has a nearly 40x impact on I/O performance? While I don't doubt your results, there must be something we don't understand here, or that benchmark is seriously flawed in this application.