Quote Originally Posted by alfaunits View Post
Can you upload a sample somewhat large file for me? (128KB or more)
I'll do that later today.

Quote Originally Posted by alfaunits View Post
0 fill on OCZ drives, IIRC, is equal to a TRIM. (or was it 0xFF?)
IMO, 0-fill blocks would not get written to the SF drive at all, hence, there would be no NAND throttling from it, whether you write 500MB of 0-fill or 500GB of 0-fill.
That was the case using the Indilinx controller, not the SandForce.

The SF controller handles 0-fill (or any other single repeated value) the same way; no doubt it leads to exceptional "compression".
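SandForce's internal compression is proprietary, but the effect is easy to illustrate with an ordinary software compressor (zlib here, purely as a stand-in, not the controller's actual algorithm): a 0-fill buffer collapses to almost nothing, while random data barely shrinks at all.

```python
import os
import zlib

MB = 1024 * 1024

# 0-fill buffer: a single repeated byte value, the best case for any compressor.
zeros = b"\x00" * MB
# Random data: the worst case, effectively incompressible.
rand = os.urandom(MB)

zc = zlib.compress(zeros)
rc = zlib.compress(rand)

print(f"0-fill: compresses to {len(zc) / MB:.4%} of original size")
print(f"random: compresses to {len(rc) / MB:.2%} of original size")
```

The same asymmetry is why a compressing controller can post huge sequential numbers on 0-fill while incompressible data shows the real NAND throughput.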

Quote Originally Posted by alfaunits View Post
Anvil, if you dedicate a separate thread for random generator, the CPU should be able to provide ample speed to write completely random data to the drive without any waiting in between (giving GC no time to react)
I've got too much on my plate right now, but we might see such a thread.
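The idea alfaunits describes can be sketched roughly like this: a dedicated generator thread keeps a queue of pre-built random buffers topped up, so the writer thread always has data ready and never stalls waiting on the RNG. This is only a sketch, assuming a 4MB buffer size and `os.urandom` as the random source; the real tool's internals may differ.

```python
import os
import queue
import threading

BUF_SIZE = 4 * 1024 * 1024   # 4MB buffers (assumed size, for illustration)
DEPTH = 8                    # how many buffers to keep pre-generated

def generator(q: queue.Queue, count: int) -> None:
    """Dedicated thread: pre-generates random buffers so the writer never waits."""
    for _ in range(count):
        q.put(os.urandom(BUF_SIZE))  # os.urandom stands in for the tool's RNG
    q.put(None)                      # sentinel: no more buffers

def run(count: int) -> int:
    q: queue.Queue = queue.Queue(maxsize=DEPTH)
    t = threading.Thread(target=generator, args=(q, count), daemon=True)
    t.start()
    written = 0
    while (buf := q.get()) is not None:
        # In the real tool this would be the disk write; here we just count bytes.
        written += len(buf)
    t.join()
    return written

if __name__ == "__main__":
    print(run(4))  # 4 buffers of 4 MiB
```

With the queue kept full by the generator, the write loop's pace is set by the drive, not the CPU, which also denies the controller's GC any idle time between writes.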

Quote Originally Posted by johnw View Post
As long as you are testing it, I suggest concatenating a bunch of the smaller files into one larger file before compressing it. Just to prove that the data is not repeating on the file level (not that I doubt you).
I've done a few tests using alternating 4MB buffers, will test that on the Areca later today.
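johnw's check can be run with a few lines of code: generate independent buffers, concatenate them into one large blob, and compress the whole thing. This sketch uses zlib and scaled-down 256KB chunks purely for illustration; a ratio at or near 1.0 means the compressor found no repetition across chunk boundaries.

```python
import os
import zlib

CHUNK = 256 * 1024   # scaled down from the 4MB buffers, for a quick check
COUNT = 16

# Independently generated chunks, concatenated into one "large file".
chunks = [os.urandom(CHUNK) for _ in range(COUNT)]
big = b"".join(chunks)

comp = zlib.compress(big, level=9)
ratio = len(comp) / len(big)
print(f"concatenated: {len(big)} bytes, compressed ratio: {ratio:.3f}")
# ratio ~1.0 (or slightly above): no file-level repetition detected.
```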
I tested the original model last night:
100% incompressible data started at 1.5GB/s. It slowed down due to the array I was testing on (just a few drives), but it shows the potential.

100% incompressible data + no deduplication at the disk level
was down to 330-350MB/s, so it makes a rather large impact.

Both were tested on the 980X @ 4GHz.

Quote Originally Posted by Computurd View Post
big plus one here.

great job btw guys, anvil especially! I know you guys are dedicating a lot of time to this, and it is very interesting!
Thanks!
There is room for more drives in this test.