I'm not impatient.
It's just a slow process with my drive, much slower than yours.

Some more static data wouldn't hurt though.
Too many random writes is what's unrealistic; desktop users hardly do any random writes "the IOmeter way" at all.
The small files we are creating are what normal "random" IO looks like; the difference is that we are deleting them over and over again, so the LBAs are all "cleaned" for each loop.
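For illustration, here's a minimal Python sketch of the kind of create/delete loop I mean (the directory, file count and file size are made-up placeholders, not the actual test parameters):

```python
import os

WORK_DIR = "testdata"      # placeholder path, not the actual test setup
FILE_COUNT = 10_000        # placeholder; whatever the real loop uses
FILE_SIZE = 4 * 1024       # 4KiB "small" files

os.makedirs(WORK_DIR, exist_ok=True)

def one_loop():
    # Create a batch of small files -- this is the "normal" random IO part.
    for i in range(FILE_COUNT):
        with open(os.path.join(WORK_DIR, f"f{i:05d}.bin"), "wb") as f:
            f.write(os.urandom(FILE_SIZE))
    # ...then delete every one of them, so all those LBAs are
    # "cleaned" again before the next pass.
    for i in range(FILE_COUNT):
        os.remove(os.path.join(WORK_DIR, f"f{i:05d}.bin"))
```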

The random part I'm thinking about including in the loop would never (or very seldom) be deleted, and that would put more strain on the process. Too much would be unrealistic though, so maybe 500MB is a bit too much; I'll have to do some math on it.
Well, I've already had a brief look at it, and on my drive 500MB per loop would mean about 85-90GB per day of random writes, which is quite a bit.
If the random write limit were 7.5TB, that would mean about 85 days, and I'd expect we've already used quite a bit of that reserve.
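
To make that concrete, here's a rough sketch of the random-write step I have in mind, again with made-up names and sizes: a fixed area that is created once and never deleted, with 500MB of 4KiB random-offset writes into it per loop.

```python
import os
import random

RANDOM_FILE = "random_area.bin"   # hypothetical: created once, never deleted
AREA_MB = 4096                    # size of the persistent random-write area
WRITE_MB_PER_LOOP = 500           # random data rewritten per loop
BLOCK = 4096                      # 4KiB blocks
AREA_BLOCKS = AREA_MB * 1024 * 1024 // BLOCK

# Create the area once, if it doesn't exist yet.
if not os.path.exists(RANDOM_FILE):
    with open(RANDOM_FILE, "wb") as f:
        f.truncate(AREA_MB * 1024 * 1024)

def random_write_pass():
    # 500MB worth of 4KiB writes at random offsets inside the file.
    # Because the file is never deleted, these LBAs never get "cleaned".
    blocks = WRITE_MB_PER_LOOP * 1024 * 1024 // BLOCK
    with open(RANDOM_FILE, "r+b") as f:
        for _ in range(blocks):
            f.seek(random.randrange(AREA_BLOCKS) * BLOCK)
            f.write(os.urandom(BLOCK))
```

For the day count: 85-90GB/day at 500MB per loop implies roughly 170-180 loops per day, and 7.5TB at 85-90GB per day works out to roughly 83-88 days, which is where the "about 85 days" figure comes from.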

Make no mistake, this is not going to be done in a few weeks; the site I originally linked to struggled for a year. If we introduce a bit of random writes we might be done within a few months.

When I reach the 35TB limit (or so) I'll be moving it to a different computer (a home server) where it can just sit until it's done. Having that extra computer is what makes this possible.