
Originally Posted by stevecs
Did you run xdd against a couple ram disk partitions for comparison?
I now have some results from running xdd against the ram disks. I created a very basic striped LVM device using:
Code:
pvcreate /dev/ram0 /dev/ram1
vgcreate lvram /dev/ram0 /dev/ram1
lvcreate -i2 -I128 -L5G -n ramdisk lvram
mkfs.jfs /dev/lvram/ramdisk
mount /dev/lvram/ramdisk /ramdiskstr/ -o noatime,nodiratime
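To double-check that the LV really is striped across both ram disks before benchmarking, something like this should work (the dm name lvram-ramdisk is a guess based on the usual vg-lv naming):
Code:
lvdisplay -m /dev/lvram/ramdisk   # the segment info should show type "striped", 2 stripes, 128 KiB stripe size
dmsetup table lvram-ramdisk       # the table should show a single "striped" target over ram0 and ram1 (by major:minor)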
I let xdd work on a 4 GB file, reading ~16 GB of data per pass:
Code:
dd if=/dev/zero of=$XDDTARGET bs=1M count=4096
sync ; sleep 5
$XDD -verbose -op read -target ${XDDTARGET} -blocksize 512 -reqsize $RQ -mbytes 16384 -passes 5 -dio -seek random -seek range 4000000 -queuedepth $QD
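The $RQ and $QD variables are set outside the command shown; a rough sketch of a sweep wrapper is below (the xdd path, target path and sweep values are placeholders; for the results shown here RQ was 16, i.e. 16 x 512 B = 8 KiB requests, and QD was 128, as the ReqSize and Q columns confirm):
Code:
# sketch of the outer loop only -- XDD and XDDTARGET paths are placeholders
XDD=/usr/local/bin/xdd
XDDTARGET=/ramdiskstr/xddtest
for QD in 32 64 128; do
    for RQ in 16 32 64; do
        $XDD -verbose -op read -target ${XDDTARGET} -blocksize 512 -reqsize $RQ \
             -mbytes 16384 -passes 5 -dio -seek random -seek range 4000000 -queuedepth $QD
    done
done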
Here are the results:
Code:
                   T    Q        Bytes       Ops    Time     Rate       IOPS  Latency  %CPU  OP_Type  ReqSize
TARGET  PASS0001   0  128  17179869184   2097152  17.714  969.824  118386.69   0.0000  2.73     read     8192
TARGET  PASS0002   0  128  17179869184   2097152  18.182  944.893  115343.37   0.0000  2.16     read     8192
TARGET  PASS0003   0  128  17179869184   2097152  18.651  921.124  112441.89   0.0000  2.21     read     8192
TARGET  PASS0004   0  128  17179869184   2097152  18.490  929.159  113422.79   0.0000  2.20     read     8192
TARGET  PASS0005   0  128  17179869184   2097152  18.171  945.431  115409.08   0.0000  2.28     read     8192
TARGET  Average    0  128  85899345920  10485760  87.133  985.837  120341.44   0.0000  2.14     read     8192
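As a sanity check, the Rate and IOPS columns agree: 969.824 MB/s divided by the 8192-byte request size gives about 118,387 requests per second, matching the reported 118386.69 for pass 1.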
Meanwhile, iostat -mx dm-3 ram0 ram1 15 showed this:
Code:
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.62   0.00    14.41     0.00    0.00  84.97

Device:  rrqm/s  wrqm/s        r/s   w/s   rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
ram0       0.00    0.00       0.00  0.00    0.00   0.00      0.00      0.00   0.00   0.00   0.00
ram1       0.00    0.00       0.00  0.00    0.00   0.00      0.00      0.00   0.00   0.00   0.00
dm-3       0.00    0.00  115517.87  0.13  902.48   0.00     16.00      0.38   0.00   0.00  37.97
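The iostat numbers line up with xdd: avgrq-sz of 16.00 sectors is 16 x 512 B = 8 KiB, i.e. exactly the request size, and the ~115.5k r/s on dm-3 matches the per-pass IOPS xdd reports.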
I do wonder why there are no traffic reports for ram0 and ram1, but I guess this is because of the way these devices are implemented.
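To check whether the kernel accounts I/O on the brd devices at all, the raw counters can be read directly; if they stay at zero during a run, that would explain the empty iostat lines (just a guess at the cause):
Code:
cat /sys/block/ram0/stat /sys/block/ram1/stat   # per-device I/O counters (reads completed, sectors read, ...)
grep -E ' ram[01] ' /proc/diskstats             # the same counters as exposed in /proc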
For comparison, the same xdd test on a single (non-LVM) ram disk with an ext2 fs (a jfs-formatted ram disk couldn't be mounted on my system):
Code:
mke2fs -m 0 /dev/ram3
mount /dev/ram3 /ramdisksing/
gave the following results:
Code:
                   T    Q        Bytes       Ops    Time      Rate       IOPS  Latency  %CPU  OP_Type  ReqSize
TARGET  PASS0001   0  128  17179869184   2097152  17.537   979.620  119582.57   0.0000  3.06     read     8192
TARGET  PASS0002   0  128  17179869184   2097152  18.119   948.153  115741.34   0.0000  2.26     read     8192
TARGET  PASS0003   0  128  17179869184   2097152  18.365   935.447  114190.29   0.0000  2.39     read     8192
TARGET  PASS0004   0  128  17179869184   2097152  17.933   957.981  116941.10   0.0000  2.27     read     8192
TARGET  PASS0005   0  128  17179869184   2097152  18.004   954.229  116483.08   0.0000  2.21     read     8192
TARGET  Average    0  128  85899345920  10485760  85.710  1002.204  122339.31   0.0000  2.20     read     8192
On earlier tests with striped and single ram disks, bm-flash reported about 190k and 200k respectively. My colleague set up the RAM disks back then, so I'll have to ask him exactly how he did that. In xdd, however, there's essentially no difference: the averages are within about 2% (120,341 vs 122,339 IOPS). Since /dev/ram3 didn't show up in iostat, there's nothing to report there.
Next I'll look at the performance of two RAID6 arrays on two 1231s and will try to put the journal on a completely different disk (there is a 1680 in the machine with 4 SSDs, so I'll try to put the journal there; alternatively, there are also 2 SAS HDDs on that 1680 that could hold it).