I asked on another forum how QD is assigned and received this answer (emphasis added):

The queue depth is not "assigned" in any way; it's just how many commands are outstanding on your storage subsystem.
Meaning -> the more random IO -> the higher the queue
The slower your storage -> the higher your queue
Now you've got an SSD which is fast as hell on random IO (because of low access times) and a workload which is very light -> so you've got no queued commands, because your storage is not the bottleneck.
Some SSD benchmarks even use a queue depth of 64; take a look at the AS SSD Benchmark.
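That lines up with Little's Law from queueing theory: the average number of outstanding commands is just the completion rate times the average per-command response time. Quick sketch below; the IOPS and latency numbers are made up for illustration, not measurements from my drives:

```python
# Little's Law: average outstanding commands = completion rate * average response time.
# All numbers here are hypothetical, just to illustrate the quoted point:
# for the same workload, slower storage means a deeper queue.

workload_iops = 300   # assumed steady random-read rate the workload generates
                      # (and that both drives can actually sustain)

# assumed average per-command response times, in seconds
avg_response_time = {
    "HDD (7200 rpm)": 0.008,    # ~8 ms: seek + rotational latency dominates
    "SATA SSD":       0.0002,   # ~0.2 ms: no moving parts, low access time
}

for name, resp_s in avg_response_time.items():
    qd = workload_iops * resp_s   # Little's Law: L = lambda * W
    print(f"{name:14s} -> average queue depth ~ {qd:.2f}")
```

Same workload in both cases, but the HDD sits with a couple of commands queued while the SSD's queue is basically empty, which is exactly the "your storage is not the bottleneck" point.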
This is making more sense to me, tbh. During my own testing I came to the same conclusion: with my HDD (Caviar Black), during gaming and other usage the QD goes much higher than with the SSD. I think this is where you can see the benefit of RAID and SSDs: they keep the QD low because there is so much throughput at low QD, lots of channels, etc.
For instance, at a QD of 1 a RAID array can deliver the throughput a single SSD would only reach at a queue depth of 32. So the queue depth never needs to climb, and access times stay lower.
For instance, sequential read at a QD of 1 with my controller:
4k - 136
8k - 200
16k - 355
32k - 519
64k - 692

That is at a QD of 1, and it helps explain why my QD never goes over 2 on my array. On a Caviar Black, under the same usage, it goes to 6+.
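To put the QD-1-on-an-array vs QD-32-on-a-single-drive idea into a rough sketch: the OS only keeps as many commands outstanding as it needs for the storage to keep pace with the workload, so the observed QD settles at the shallowest depth that covers the demanded rate. The throughput-vs-QD numbers below are invented for illustration, not benchmarks of any real drive:

```python
# Hypothetical throughput-vs-queue-depth curves (MB/s), purely illustrative.
# The queue only grows as deep as it has to for the storage to keep up with
# what the workload is asking for.

single_ssd = {1: 30, 2: 55, 4: 100, 8: 180, 16: 290, 32: 380}   # assumed numbers
raid0_ssd  = {1: 360, 2: 600, 4: 900, 8: 1200}                  # assumed numbers

def qd_needed(curve, demanded_mb_s):
    """Smallest queue depth at which the device's throughput covers the demand."""
    for qd in sorted(curve):
        if curve[qd] >= demanded_mb_s:
            return qd
    return None  # device can't keep up even at its deepest measured QD

demand = 350  # MB/s the workload wants (assumption)
print("single SSD needs QD", qd_needed(single_ssd, demand))   # -> 32
print("RAID 0 array needs QD", qd_needed(raid0_ssd, demand))  # -> 1
```

With the same 350 MB/s demand, the single drive only gets there at QD 32, while the array covers it at QD 1, which is why the array's queue never has a reason to build up.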