Wow, nice numbers tilt.

I just have to ask: are you running it as an integrated RAID-0 or as pass-through software RAID?
I see that according to CDM 3.0 you get 120K IOPS read, but AS SSD gives you 175K.

CDM 3.0's 4KB QD32 test is single-threaded, while AS SSD's 4KB QD64 test uses 64 threads. You may have hit a CPU limit at 120K IOPS in CDM that AS SSD doesn't have.
8x X25-M at 30-40K IOPS each should get you 240-320K IOPS, but you maxed out at 175K.
On the other hand, 175K IOPS @ QD 64 = ~2734 IOPS per queue slot.
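Just to show where that per-slot figure comes from (a trivial check, nothing more):

```python
# Per-queue-slot throughput from the AS SSD result quoted above.
iops = 175_000   # AS SSD 4KB QD64 read result
qd = 64          # outstanding I/Os in that test
print(iops // qd)   # 2734 IOPS per queue slot
```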

Would you run a couple of quick IOmeter setups if I provide the config? (Or you can make them yourself.)
The first: 1 worker, 4KB random read, 4KB aligned, 1GB test file, queue depth stepped exponentially from 1 to 256 (2^n stepping), 1 sec ramp time, 10-15 sec run time (the small test file and short run time are fine since you don't have cache). That totals 9 runs and takes ca. 2-3 minutes.
The second: #CPU cores (logical) workers, same access spec and test size/length, but QD stepped exponentially from 1 to {256/#workers} (say 8 logical "cores" makes 8 workers, each with QD 1-32, giving 6 runs).
The results should be fairly accurate and paint a picture of IOPS scaling. The reason for QD 1-256 is to give every SSD a QD of 32 at the end (which is the max queue depth supported by NCQ in the SATA spec).
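If it helps, here's a quick sketch of the QD stepping both setups would walk through (the 8-core count is just an example; set num_cores to whatever your rig reports as logical cores):

```python
# Sketch of the exponential QD stepping for the two proposed IOmeter runs.

def qd_steps(max_qd):
    """2^n stepping: 1, 2, 4, ... up to and including max_qd."""
    qd, steps = 1, []
    while qd <= max_qd:
        steps.append(qd)
        qd *= 2
    return steps

num_cores = 8  # hypothetical logical core count of the test rig

# Run 1: single worker, QD 1-256
run1 = qd_steps(256)
print(run1)   # [1, 2, 4, 8, 16, 32, 64, 128, 256] -> 9 runs

# Run 2: one worker per logical core, each worker QD 1-(256/num_cores),
# so the combined QD at the final step is still 256 (32 per SSD).
run2 = qd_steps(256 // num_cores)
print(run2)   # [1, 2, 4, 8, 16, 32] -> 6 runs per worker
```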