I'm no shill for SF or SF's marketing, but I'd rather have a universally accepted SNIA spec than a fractured committee. If you look at writes by volume, most are small, compressible writes anyway; if you look at them by size, the larger the write, the less likely it is to be compressible. On average, I think the 46 to 67 percent range encapsulates a typical client workload: the host system is constantly writing small bits of data, while the larger accesses are usually initiated by the user.
I'd like to see more data on transfer size and access patterns vs. compressibility. The larger the transfer, the less random and the less compressible it tends to become.
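The pattern is easy to demonstrate yourself. This is just an illustrative sketch (the data buffers are made up, not measurements from a real trace): small host-generated writes tend to be structured and repetitive, while large user transfers like media files look statistically random, which is what an SF-style compressing controller sees.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size (lower = more compressible)."""
    return len(zlib.compress(data)) / len(data)

# Small, structured write (e.g. filesystem metadata/journal): compresses well.
small_structured = b"\x00" * 3072 + b"journal-entry-0001" * 56  # ~4 KB

# Large, user-initiated transfer (media, archives): effectively random bytes.
large_random = os.urandom(1 << 20)  # 1 MB

print(f"~4K structured write: {compression_ratio(small_structured):.2f}")
print(f"1MB random transfer:  {compression_ratio(large_random):.2f}")
```

Running this, the structured 4K buffer shrinks to a few percent of its size while the random 1 MB buffer doesn't compress at all, which is roughly the split a controller sees between host housekeeping writes and user data.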
If SNIA has any chance of getting the acceptance it deserves in client-side storage, some concessions to SF will have to be made, and the more time I spend trying to understand SF, the more I've come to terms with that. The truth is, the best SF drives are extremely competitive on speed. Latency will be a problem in some cases, but you still get some advantage even with 80% compressible data. Past 46% SF really plateaus, but there is still an advantage there. Now, my time futzing with SF leads me to believe there is more overhead than is user-visible, perhaps enough to cancel out the compression endurance advantage, but it could also be a case where more over-provisioning would pay dividends, as I've maintained for some time.
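The endurance trade-off above can be put into rough numbers. This is a naive back-of-the-envelope model with made-up figures, not anything from SF's documentation: if the controller compresses host data before garbage collection amplifies it, the two effects multiply.

```python
def effective_write_amplification(gc_wa: float, compress_ratio: float) -> float:
    """Naive model: NAND writes per host write when compression happens
    before garbage collection.

    gc_wa:          write amplification from GC alone (>= 1.0; lower with
                    more over-provisioning)
    compress_ratio: compressed size / original size (1.0 = incompressible)
    """
    return gc_wa * compress_ratio

# Hypothetical numbers purely for illustration.
for ratio in (0.46, 0.67, 0.80, 1.00):
    wa = effective_write_amplification(3.0, ratio)
    print(f"data at {ratio:.0%} of original size -> effective WA {wa:.2f}")
```

Under this toy model, data compressing to 46% of its size cuts an assumed GC write amplification of 3.0 down to about 1.4, so any hidden controller overhead would have to be substantial to erase that margin, and extra over-provisioning (lowering `gc_wa`) helps regardless of compressibility.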
Now, this is entirely separate from steady-state performance, but the more compressible the data, the longer it takes to reach steady state. I'll have to play around some more when I get home, but I also believe some of the housekeeping algorithms in the 5.0 reference FW are different; either way, steady-state performance will continue to be an area where improvement is needed. That said, some SF drives do have generally desirable attributes beyond the obvious, like stellar 4K QD1 performance. That's not an SF-exclusive trait, but it's one area of performance I prize.