I've asked the same question many times. It's best illustrated by looking at Indilinx Barefoot drives.
You have the 32GB unit, which can do roughly 200MB/s read and 100MB/s write, 15K IOPS read and 3K IOPS write, with only 4 (or 8?) NAND chips on 4 flash channels. Then you have the 256GB unit, which COULD do 8 times the bandwidth simply by running 8 times as many flash chips in parallel, yet only manages 250MB/s read, 180MB/s write, 15K IOPS read, and 3K IOPS write.
You can take 8x Barefoot 32GB in RAID-0 off an HBA like the LSI 9211 and get roughly 1400-1500MB/s read, 600-700MB/s write, 120K IOPS read, and 24K IOPS write. This makes me think the higher-capacity Barefoot models are a complete waste of flash modules: the bottleneck is clearly the SSD controller (and maybe also the SATA 3Gbps interface, for sequential reads at the higher capacities).
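To show how close those figures sit to simple linear scaling, here's a back-of-envelope sketch. The ~90% bandwidth efficiency factor and the per-drive numbers are rough assumptions from the figures above, not measurements:

```python
# Rough RAID-0 scaling model: bandwidth scales near-linearly minus some
# striping overhead, and is clipped at whatever the host link can carry;
# IOPS scale linearly with drive count.
def raid0_estimate(n_drives, per_drive_mbps, per_drive_iops,
                   host_cap_mbps=float("inf"), efficiency=0.9):
    bw = min(n_drives * per_drive_mbps * efficiency, host_cap_mbps)
    iops = n_drives * per_drive_iops
    return bw, iops

# 8x Barefoot 32GB behind an LSI 9211 (the PCIe link is not a bottleneck):
r_bw, r_iops = raid0_estimate(8, 200, 15_000)  # -> (1440.0, 120000)
w_bw, w_iops = raid0_estimate(8, 100, 3_000)   # -> (720.0, 24000)
```

That lands right in the observed 1400-1500MB/s / 600-700MB/s range, which is what you'd expect if the per-drive controllers, not the flash, set the ceiling.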
You could make a simple SATA 6Gbps SSD with 3x 32GB Barefoot drives and a cheap ROC (RAID-on-Chip), using internal RAID-0 to get 550MB/s read, 250-300MB/s write, 45K IOPS read, 9K IOPS write, and 96GB of capacity.
The same setup with 4x 32GB Barefoot would do 600MB/s read (or whatever the practical limit of the interface is), 350-400MB/s write, 60K IOPS read, 12K IOPS write, and have 128GB of capacity. Such an SSD would wipe the floor with the SandForce 100GB, C300 128GB, and X25-M 160GB, at least in terms of bandwidth and value. Even if you had to use a $100 ROC chip, you could still come in at $500.
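Plugging those builds into the same sketch, with an assumed ~560MB/s practical ceiling for a single SATA 6Gbps port:

```python
# Reusing raid0_estimate from above; 560MB/s is an assumed practical
# ceiling for one SATA 6Gbps link, roughly what real 6Gbps devices reach.
SATA6_CAP_MBPS = 560

for n in (3, 4):
    r_bw, r_iops = raid0_estimate(n, 200, 15_000, host_cap_mbps=SATA6_CAP_MBPS)
    w_bw, w_iops = raid0_estimate(n, 100, 3_000, host_cap_mbps=SATA6_CAP_MBPS)
    print(f"{n}x: read ~{r_bw:.0f}MB/s / {r_iops // 1000}K IOPS, "
          f"write ~{w_bw:.0f}MB/s / {w_iops // 1000}K IOPS, {32 * n}GB")
# 3x: read ~540MB/s / 45K IOPS, write ~270MB/s / 9K IOPS, 96GB
# 4x: read ~560MB/s (interface-capped) / 60K IOPS, write ~360MB/s / 12K IOPS, 128GB
```

Note that the 4x read figure is the first place where the SATA link, rather than the controllers, becomes the limit.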

Another interesting possibility is 3x X25-V in the same type of setup: roughly 550MB/s read, 120-130MB/s write, 90K IOPS read, 30K IOPS write, and 120GB of capacity, for about $400 with the ROC included.
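For completeness, the same model works here too. The per-drive figures below are assumptions backed out of the totals above, not Intel's spec sheet:

```python
# Assumed per-X25-V figures: ~185MB/s read, ~42MB/s write, 30K/10K IOPS.
r_bw, r_iops = raid0_estimate(3, 185, 30_000, host_cap_mbps=SATA6_CAP_MBPS)
w_bw, w_iops = raid0_estimate(3, 42, 10_000, host_cap_mbps=SATA6_CAP_MBPS)
# -> read ~500MB/s / 90K IOPS, write ~113MB/s / 30K IOPS
```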

What would be awesome is a native PCIe SSD with similar characteristics to the SandForce SF-1500/1200 50/100GB (minus the RAISE™), just with 3-4 controllers in internal RAID on the board. Files compressible to ~20% of their original size could be read and written at 2-3GB/s+, not to mention random write around 90-100K IOPS (4K aligned).
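The compression math behind that 2-3GB/s figure, assuming each controller can physically write to NAND at ~150MB/s (an illustrative assumption, since SandForce doesn't publish the raw NAND-side number):

```python
# With SandForce-style compression, a file that compresses to 20% of its
# size needs only a fifth of the NAND bandwidth, so effective host
# throughput is physical throughput divided by the compressed fraction.
def effective_mbps(physical_mbps, compressed_fraction):
    return physical_mbps / compressed_fraction

controllers = 4
physical_write = controllers * 150           # assumed ~600MB/s total to NAND
print(effective_mbps(physical_write, 0.20))  # -> 3000.0 MB/s effective host write
```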