stripe size = the contiguous chunk assigned to each drive (what you set in the controller). Stripe width is the number of drives in your array, and multiplying that by the stripe size gives you the size of a full stripe. A data stripe width counts only the data drives per array (ie, a raid-5 of 5 drives has a stripe width of 5 and a data stripe width of 4; a raid-6 of 6 drives has a stripe width of 6 and a data stripe width of 4, with 2 drives for parity). When I give a width in KiB below, I mean that drive count times the stripe size.
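(If you happen to be on Linux software RAID rather than a hardware controller, you can read the chunk/stripe size straight off the array; /dev/md0 is just a placeholder device here.)
Code:
mdadm --detail /dev/md0 | grep -i chunk
# prints a line like "Chunk Size : 128K"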
Now for lvm you have to worry about the 192KiB of initial metadata. So, as you calculated, if you have a raid-5 of 5 drives (4 data) with an 8KiB stripe size, your data stripe width would be 32KiB. Since 192KiB is divisible by 32KiB (6 stripes), starting your data area at 192KiB means you are aligned right out of the gate, so you don't need to set a metadatasize argument. But you want your LVM stripe size (striping between arrays) to be a multiple of a single array's data stripe width (ie, 32KiB), so that you are not hopping between arrays and writing only partial stripes.
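(To double-check where LVM actually started the data area, pvs can report the offset of the first physical extent; the device name is a placeholder.)
Code:
pvs -o +pe_start --units k /dev/sdb
# the "1st PE" column should be a multiple of your data stripe width, eg 192.00k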
Now for argument's sake, if you have a stripe size of 128KiB for your array (same 5-drive raid-5), that works out to a 512KiB data stripe width (4 x 128KiB). So you want to use the metadatasize argument to make your data area START at 512KiB (or a multiple thereof), and set your lvm stripe size (between your arrays) to 512KiB (the data stripe width of a single array) or a multiple thereof.
so
Code:
pvcreate --metadatasize X /dev/sdb /dev/sdc
# X = 511k, or whatever it takes to make the data area start at 512KiB once LVM pads it out
vgcreate ssd -s Y /dev/sdb /dev/sdc
# Y is your physical extent size, which you want to be a multiple of your data stripe width. The default is 4MiB, and that's fine here since both of our data stripe widths divide it evenly: 8KiB chunks (32KiB) and 128KiB chunks (512KiB)
lvcreate -i2 -IZ -L447G -n ssd-striped ssd
# Z here should be a multiple of your data stripe width (32KiB or 512KiB)
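Filling in the placeholders for the 128KiB-chunk case (device names and the 447G size just carried over from above; the 511k figure counts on LVM rounding the metadata area up to 512KiB, so verify with pvs, and newer LVM versions can pin the offset directly with --dataalignment):
Code:
pvcreate --metadatasize 511k /dev/sdb /dev/sdc
# or on newer LVM: pvcreate --dataalignment 512k /dev/sdb /dev/sdc
vgcreate ssd -s 4M /dev/sdb /dev/sdc   # 4MiB extents divide evenly by 512KiB
lvcreate -i2 -I512 -L447G -n ssd-striped ssd   # 512KiB LVM stripe = one array's data stripe width
pvs -o +pe_start --units k   # confirm the 1st PE sits at 512.00k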
make sense?