24.41TB
4 Reallocated sectors (is it really sectors, or is it a page?)
Media Wearout is at 85
and FFFF (E2-E4) returned sometime last night, will have to run smartmontools to clear it up.
Attachment 114616
My drive matched Anvil's drive's wear-out level. One_Hertz's drive has written 5TB more at no wear-out cost as it stands, and Ao1's drive has surprised me by wearing out rapidly after a point. The SandForce drive seems interesting to look at.
SSDLife reassesses the situation. :)
Delta between most-worn and least-worn Flash blocks: 7
Approximate SSD life Remaining: 96%
Number of bytes written to SSD: 18432 GB
Don't forget that the SF started off writing highly compressible data. 4TB or 5TB worth.
It will be interesting to see how linear the wear is with uncompressible data.
Is wear levelling with SF 100% dependent on how compressible the data is?
Intel drives don't have that advantage, so they must rely on different techniques. The fact that the 320 is lasting longer (so far) than the X25-V with NAND that has ~40% less PE cycles is quite remarkable. It is able to write faster and last longer with less PE.
Assuming that the only wear levelling technique is compression the SF drive should start to rapidly deteriorate, but we will soon see. :)
I've been thinking about why read speeds get throttled with SF drives. My guess is that the channel that restricts write speeds deals with 2 way traffic. You can't slow down writes without slowing down reads at the same time.
"If" that is the case it is not that sophisticated, as there should be no reason to slow down read speeds.
Throttling read speeds and the poor implementation of TRIM seem to be the Achilles heel of an otherwise great technology.
It would be interesting to see what Intel could do with the SF controller.
From the leaked Intel SSD roadmap slides we can expect to see an Intel SSD with SF controller pretty soon ( Q4 2011 ) !
http://thessdreview.com/latest-buzz/...ce-capacities/
Unfortunately it is looking that way.
I want to be wrong on this too but likely the deviation will not be too far out.
Quote:
Assuming that the only wear levelling technique is compression the SF drive should start to rapidly deteriorate, but we will soon see. :)
The constraint is quite understandable if the data is stored compressed; otherwise I'd say very poor software implementation. I think that even with compressed data read speeds should not have been affected, but in the end the trade-offs and decision making favoured write speeds and that is what we see. SandForce may have improved the Vertex 3 in this regard. We can't be sure unless we test it.
Quote:
I've been thinking about why read speeds get throttled with SF drives. My guess is that the channel that restricts write speeds deals with 2 way traffic. You can't slow down writes without slowing down reads at the same time.
"If" that is the case it is not that sophisticated, as there should be no reason to slow down read speeds.
Throttling read speeds and the poor implementation of TRIM seem to be the Achilles heel of an otherwise great technology.
Not sure if Intel would do the same trade-off.
Quote:
It would be interesting to see what Intel could do with the SF controller.
All 25nm devices will last longer than their previous gen counterparts. And the next generation will go even further. When I was speaking with the head of R&D for LSI we were discussing flash in general (started with a WarpDrive discussion). Right now his stance is that the market doesn't need to be doing any more shrinks; there are so many gains to be made with the current generation of NAND. The place where the increases are by far the greatest is in controller technology. Coupled with current gen NAND, you could be looking at 50 percent more durability from the current gen of controllers. Kind of like the quad core vs six core debate: why keep going further if you aren't even using what you have?
Quote:
The fact that the 320 is lasting longer (so far) than the X25-V with NAND that has ~40% less PE cycles is quite remarkable. It is able to write faster and last longer with less PE.
No surprise that these controllers with 25nm NAND are more durable than the previous generation. That is the whole purpose of this evolution of SSDs. A lot of people are thinking the sky is falling with lower PE cycle ratings, but in truth the situation is going in the exact opposite direction. Endurance, durability and reliability are going upwards at a phenomenal rate. But of course that is what has been said all along; there are always the chicken littles LOL
No coincidence that Intel is going with 25nm MLC for its next gen enterprise series. Not only has the performance jumped from the controller usage, but MLC in general is being managed in a much better way, especially with revolutionary technology such as ClearNAND.
I will tell you this though, some industry insiders frown upon the transition from SLC to MLC, regardless of the maturation of the technology.
And your drive had been running 24/7 for about 1 year?
Mine has been running for ~350 hours; it doesn't look like that matters even though this is a test and yours has been running "for real".
From what I've read, the SF controller doesn't tolerate high deltas between least/most worn "flash blocks", meaning that it starts shuffling static data when needed. Don't know about other controllers; there may be some static wear-leveling but we'll probably never know.
Most data in real life is compressible, at least for the OS drive, so one can easily add 30% or more to the final result of this test. (as long as it stays at incompressible data)
Testing with incompressible data is of course important but it leaves a lot of questions to be answered.
As for Intel using the SF controller, at first I thought it was a bit far fetched but when the Marvell controller popped up in the 510 I didn't know what to think, so, IMHO, it's not impossible.
I do think for that to happen Intel would probably want to write their own firmware. (like Crucial did for the Marvell controller)
There are some "bugs" or side effects or whatever one wants to call it that would never have been tolerated had it been an Intel SSD.
Still, I really enjoy the SF drives; I've had the same level of maintenance with these drives as with other SSDs, nothing more nothing less.
There have been quite a few fw updates on the SF drives but personally I've never been waiting for some fix, that I know of.
There is that TRIM bug that never got fixed on the SF-1XXX series, that's about it.
@CT
Without the shrinking we would never get to 1TB 2.5" SSD's and prices would leave most people out in the cold.
There is of course some truth in that things are happening too fast, 20nm NAND was already on the table (at least in the headlines) before 25nm drives were available.
edit
updated graph in the post #1
Yes, of course that is THE major driver. I guess we were speaking to performance/endurance more than anything. There is just so much left on the table. TBH the scaling on the controllers on the SSDs themselves is actually not that great; they can't really handle the entire throughput of the NAND at the end of each channel. Of course this will be lessened somewhat with the newer generations, simply because of fewer channels.
Quote:
@CT
Without the shrinking we would never get to 1TB 2.5" SSD's and prices would leave most people out in the cold.
There is of course some truth in that things are happening too fast, 20nm NAND was already on the table (at least in the headlines) before 25nm drives were available.
EDIT: this does create an interesting situation with certain models of SSD, say the MaxIOPS... a new gen controller strapped on 'old' NAND that has a higher PE rating... since it is optimized and much more efficient (for use with 25nm) shouldn't the MaxIOPS line have some really awesome endurance?
I'm sure Intel would use their own fw as they did with the 510 Marvell controller. The time Intel take to test would mean it would either be an older SF controller or a specially developed controller for Intel.
I've been impressed with the SF drive. I would have liked to play with it a bit before destroying it. :ROTF: For sure though a bit of Intel finesse would not hurt to slick things up.
Anyway back on topic, I have emailed you an excel sheet with a few more stats thrown in. Just a thought, no worries if you don't want to use it.
They need to keep shrinking to get costs down. The speeds are more than enough for most users. The endurance is more than enough for most users. The price is way too high for most users.
31TB/85% as of 6 hours ago. I just picked up my 320GB mlc iodrive so I will be playing with that soon :cool:
I'll send you the new version where random data is part of the loop. (within 10 minutes)
I'm just making the last few checks.
We just need to agree on the runtime length (it's configured in ms)
I think that's all we need to do for now, it adds by default a 1GB file for random writes.
25.72TB host writes
No changes to MWI
Attachment 114623
edit:
+ chart updated...
Delta between most-worn and least-worn Flash blocks: 8
Approximate SSD life Remaining: 95%
Number of bytes written to SSD: 21,376 GB
27.06TB Host Writes
Media Wear Out 84%
Re-allocated sectors unchanged (4)
35TB. 82%. Still 1 reallocated sector. I will switch the software to the new version this evening.
Delta between most-worn and least-worn Flash blocks: 9
Approximate SSD life Remaining: 94%
Number of bytes written to SSD: 22,272 GB
EDIT: Guys are you making a switch at 35TB? What settings?
Just got the time to look at the file randomness. Anvil, can you run a CPU benchmark of whatever generates that random file?
That's the reason I asked for it, it looks like a hash of random bits rather than random bits.
The reason I ask is if this can't do something like 500MB/s for generation, and it's done in the same thread as the writing, you are essentially doing sequential transfers mostly :( (the overall "write" speed would be an indicator)
Regardless of how random the file is (compression-wise) or your internal file I/O, it seems the overall I/O is mostly sequential.
I am not sure I understand any more what you guys are trying to achieve, but with such small random I/O it is hardly real-world.
Actually, to save on CPU usage so that there is no lag, all random data should be pregenerated.
In what regard?
Not sure that I follow.
There isn't much random IO in real life but there is some and that's why we are adding a small portion of random I/O. (on top of small file I/O)
I have 3 alternating buffers so it's generally not an issue.
1.5GB/s was what I measured using a small array on the Areca 1880.
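For what it's worth, a minimal sketch of what "3 alternating buffers" of pregenerated data could look like (an assumption about the general approach only, not Anvil's actual code; buffer size is made up):
Code:
import os
from itertools import cycle

BUFFER_SIZE = 4 * 1024 * 1024   # 4 MB per buffer (assumed size)

# Pregenerate three incompressible buffers once, then rotate through them
# in the write loop, so no CPU time is spent generating random data while writing.
buffers = cycle([os.urandom(BUFFER_SIZE) for _ in range(3)])

def write_chunks(path: str, chunks: int) -> None:
    """Write `chunks` buffers to a file, alternating between the pregenerated ones."""
    with open(path, "wb") as f:
        for _ in range(chunks):
            f.write(next(buffers))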
Well, let's say (just for the explanation) that your random generator can produce 100MB/s. X25-M can also do 100MB/s of sequential.
As a result, since (if I understood you correctly) they are done in the same thread, without backbuffering, the resulting write speed would be 50MB/s.
If you get that overall speed, then X25-M is writing sequential data and not random I/O.
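A quick back-of-the-envelope check of that serialization argument (hypothetical numbers, just to illustrate the point):
Code:
# Serialized generate-then-write: each byte is first generated, then written,
# so the per-byte times add up and the effective rate is the harmonic combination.

def effective_write_speed(gen_mb_s: float, write_mb_s: float) -> float:
    """Effective MB/s when generation and writing share one thread back-to-back."""
    return 1.0 / (1.0 / gen_mb_s + 1.0 / write_mb_s)

print(effective_write_speed(100, 100))  # 50.0 - the example above
print(effective_write_speed(30, 250))   # ~26.8 - a slow generator dominates a fast drive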
It looks like you generate the random data in a separate thread then, so I misunderstood. I don't see how the entire system (as in the CPU/memory/PCI-e even) would be able to sustain those speeds.
Quote:
I have 3 alternating buffers so It's generally not an issue.
1.5GB/s was what I measured using a small array on the Areca 1880.
What happens past the write instruction issued at the device level? Will the SSD controller not try to fill a complete block whenever possible to minimise read/ relocate/ write penalties?
Presumably it would try to avoid 4K writes being scattered across the span of the drive. Isn't that what Intel refer to as write combining?
Would it also try to rotate locations of writes for wear levelling?
Just asking as I don't really know :up:
@alfaunits
Whether the random data generator is producing 10MB/s or 100MB/s has nothing to do with writing randomly or not.
The Areca can sustain those speeds until the cache is filled or as long as the array can sustain that speed, I was using a small array incapable of coping with that speed and thus it wasn't sustained. (it lasted for a few seconds)
The easy test is just to generate the random data without writing the data, that would tell the potential of the random "generator".
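Something like this would do as that "easy test" (a minimal sketch; the buffer size and the generator are placeholders, not the app's actual code):
Code:
import os
import time

BUFFER_SIZE = 4 * 1024 * 1024   # 4 MB buffers (assumed size)
ROUNDS = 256                    # 1 GB of data in total

start = time.perf_counter()
for _ in range(ROUNDS):
    buf = os.urandom(BUFFER_SIZE)   # stand-in generator; swap in whatever RNG is under test
elapsed = time.perf_counter() - start

total_mb = ROUNDS * BUFFER_SIZE / 2**20
print(f"Generated {total_mb:.0f} MB in {elapsed:.2f} s -> {total_mb / elapsed:.0f} MB/s")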
Anyway, this is off topic; a separate thread would be needed and I haven't got the time right now.
@Ao1
We can only presume but that's the general idea of write combining.
If the file spanned the whole drive everything would be random per se; what we have been doing so far is just producing writes (both small and large files) but no random IO within those files.
Random writes within a file overwrite data instead of just writing a new file; there is a huge difference between those two scenarios.
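A rough illustration of that difference (a sketch with hypothetical file names and sizes, not the endurance app's code):
Code:
import os
import random

BLOCK = 4096  # 4K writes

def random_overwrites(path: str, count: int) -> None:
    """Random writes *within* an existing file: data is overwritten in place."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(count):
            f.seek(random.randrange(0, size - BLOCK))  # random offset inside the file
            f.write(os.urandom(BLOCK))

def write_new_files(directory: str, count: int) -> None:
    """Just writing new files: every write lands in fresh logical space."""
    for i in range(count):
        with open(os.path.join(directory, f"file_{i}.bin"), "wb") as f:
            f.write(os.urandom(BLOCK))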
edit
Link to Anand
I didn't think that any of the G2 drives could do static data rotation, although I have heard talk of it. For sure the X25-M can't do it, maybe the 320 can however.
12GB of data on the SF drive has only been written to once. If that 12GB of NAND could be swapped as the rest wears down it would extend the life quite a bit.
Without static data rotation the MWI will get to 0, but there will still be 12GB of unscathed NAND. That is a likely real life scenario as well.
I think the Intel (and most others) do static data rotation, I can't see how it would work if they didn't.
SandForce static data rotation
It does require idle periods though :)
If you are doing it in the same thread as the actual writing it does, because the writing needs to wait on the generator.
I did not even take Areca into consideration here, but just the bare CPU, memory and PCI-e link. If you do it in the same thread and you can send 1.5GB/s to Areca (does not matter what Areca does with it, it can ditch them even), that means the combined speed of generator and PCI-e link is over 6GB/s total (3GB/s each on average, so per second they can write at half of the average = 1.5GB/s). You have to agree that's... too much?
Quote:
The Areca can sustain those speeds until the cache is filled or as long as the array can sustain that speed, I was using a small array incapable of coping with that speed and thus it wasn't sustained. (it lasted for a few seconds)
The easy test is just to generate the random data without writing the data, that would tell the potential of the random "generator".
Are you generating the random bits in the same thread that does the writes, or is there a separate thread for them in the background? (I suggested a back thread already, you said you have no time, so I presume it's all done in the same thread).
I know you go "grr" when reading my posts, but I am not trying to play smart - if the above numbers and the thread assumption are correct, something is too fishy and the entire test is skewed.
Why are you always making assumptions, loaded with negativity?
(that's how I'm reading most of them unfortunately, I may be alone in making this conclusion, I don't know?)
To answer some of your "questions"
1.5GB/s was using threads
The Endurance test is not using multiple "threads"
The random data is currently being pregenerated using 3 buffers...
Now, can we continue this thread, which is about endurance testing, not about making assumptions :)
Because I question illogical things. I want to know if you're doing a useful thing here or just wasting several SSDs with sequential transfers. And I know I'm not alone in that questioning.
GNU RNGs can barely do tens of MB/s and hardware RNGs don't do >500MB/s, so yes, I am quite suspicious of the claim that, without threads, the RNG is fast enough relative to the SSD's speed not to affect the overall "write" speed, i.e. that this isn't just sequential transfers overall.
To make a test, you have to assume it's valid. This looks invalid. But it's your money dumped on an SSD. If you do reach close to 200TB for the 40GB X25-M you'll just prove me right here :(
And if I am completely wrong, my sincere apologies. I don't pick random fights with people I can learn from, and you are one of them. Please consider it constructive criticism or my learning curve.
I don't think you've been reading this thread.
The random data in the test I did on the Areca wasn't being generated in real time, I never claimed it was; it's pregenerated.
The test on the Areca has nothing to do with this test, it was done in a completely different manner but still using the same pregeneration formula.
If you are a programmer, write yourself a simple function that looks like this.
declare a buffer of 4MB and fill the buffer using
for I = low to high do
buffer[I] = random(255)
Write the buffer to a file, it will be incompressible.
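In runnable form that might look something like this (a minimal sketch of the same idea, not the app's actual code):
Code:
import random

BUFFER_SIZE = 4 * 1024 * 1024   # 4 MB, as described above

# Fill the buffer with random byte values - the result will not compress.
buffer = bytearray(random.randrange(256) for _ in range(BUFFER_SIZE))

with open("incompressible.bin", "wb") as f:
    f.write(buffer)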
The data written to the Intels is just filled with "garbage" from the stack; they don't do compression, so why spend CPU power on them.
The write speeds so far are at the staggering rate of 30-50MB/s.
This is so simple that I can't believe you are questioning the test, it could have been done using a simple batch file just copying files and getting the same results.
Well, it's all there, it can be monitored using any tool.
29.2TB Host writes
Media wear out 83
Attachment 114655
Delta between most-worn and least-worn Flash blocks: 10
Approximate SSD life Remaining: 93%
Number of bytes written to SSD: 24,704 GB
Later I will remove the static data, let the app run for a bit and then I will put the static data back on. That should ensure that blocks are rotated.
The mother of all TRIMs. 20 seconds. That is based on no static data, running Anvil's app with only 1GB free... so it's a delete for the full span of the drive.
Now I will put back the static data and let it run normally. Hopefully this will help slow down the delta between most-worn and least-worn flash blocks.
I had to guess a few of the inputs (especially for one_hertz) but even if you take out the highly compressed data the SF drive worked with for the first ~5TB it is still doing really well so far, especially considering data is now uncompressible with no let up for static data rotation.
MWI = wear out
R/Sect = reallocated sectors
Here are a couple of charts that show the impact of throttling based on file size and level of compressibility of data.
Obviously this is the worst-case throttled state, but both sequential read and write speeds are hit quite hard with uncompressible data. Highly compressible data on the other hand is unaffected.
Those are very nice charts Ao1
The SF compression chart is just like I figured it would be :), quite interesting as the drive is being throttled as well.
80%, 2 reallocated sectors, 37.5TB. Switching to the new software.
Intel have posted a link to a very interesting white paper:
http://www.usenix.org/event/hotstora...pers/Mohan.pdf
It talks about the impact of the frequency of writes.
Longer recovery periods between writes can significantly boost endurance, allowing blocks to potentially undergo several million P/E cycles before reaching the endurance limit. :eek2:
(Its not going to happen in our test) ;)
We didn't agree on the random part; it's set to 1000ms per loop by default.
1000ms = 20-30MB? on the Intels. Is that something we can agree on or do we want more?
--
I've finally reached 30TB + :)
30.18TB Host writes
Media Wear out 82
Re-Allocations still at 4
@Ao1
Interesting link, will do some reading.
Here is some more info :)
P/E specs are based on the minimum.
http://www.jedec.org/sites/default/f...JESD47H-01.pdf
Delta between most-worn and least-worn Flash blocks: 11
Approximate SSD life Remaining: 92%
Number of bytes written to SSD: 26,432 GB
Really nice to see the Intel G3 with 25nm NAND beat the G2 with 34nm NAND!
Delta between most-worn and least-worn Flash blocks: 12
Approximate SSD life Remaining: 91%
Number of bytes written to SSD: 28,352 GB
Edit:
Another interesting snippet from the Intel forum:
"Read disturb refers to the property of Nand that the more a block of Nand is read, the more errors are introduced. A brief note about Read Disturb (and other various Nand properties) are discussed in this technical note from Micron:
http://download.micron.com/pdf/techn...and/tn2917.pdf
Static data that is read frequently will eventually need to be refreshed or relocated before it reaches the Read Disturb limit because of this Read Disturb property."
A lot of good reading in all those links!
32.02TB host writes (just seconds ago)
MWI 81
nothing else has changed.
Attachment 114673
I had to reboot to apply Windows updates and now it seems that DuraClass has really kicked in on uncompressible data. :mad:
I tried a secure erase from the OCZ toolbox, but it has not helped. When I tried to copy the static data back (mp3's) I could only get around 10MB/s. (Ended up at 8.55MB/s)
The endurance app is currently running at 4.13MB/s :shakes:
Dang, looks like it's game over for me unless I can get the MB/s back up.
I'd let it idle a while. (a few hours)
Pretty strange that it didn't slow down until the reboot?
I also had a reboot today (had been running for 12+ days without rebooting) and the speed picked up for the first few loops, not much, will check a bit later.
It might have occurred when the app stopped running or it might have been the reboot. Currently the app is running at 6.52MB/s.
I'm going to do another secure erase, but then I will only run the app with no static data.
If that does not work I will leave it on idle.
Sounds like an idea.
If it needs idling it could take days, interesting turn anyways. If full throttling means ~10MB/s it's pretty much "disabled".
A secure erase and then just running the app did not work. I'll let it idle now to see what happens. :rolleyes:
A couple of hours idle and now I'm back to normal....well actually a bit faster than normal.
Great!
Lets see how this develops.
edit:
Did you have a look at the smart values in the OCZ toolbox?
Popped out, came back and by the 3rd loop I'm down to 6.78MB/s.
So is this now a throttled state? Was I previously in a used state?
I'd say the speed you were getting earlier was close to normal used state, the one you've got now has to be throttled++
I'd leave it idling until tomorrow and give it another try; it's clearly telling you "something" is up.
Hmm, so an app like AS SSD writes directly to the device, avoiding Windows' file system. In such a case would a TRIM command be issued if Windows did not register the write? (It would explain a few things if a TRIM command could not be generated).
So what is the best way to keep an SSD from degradation? 3rd party maintenance programs? I'm considering a 64GB C300 but with all the things I'm reading I'm not so sure I want to get an SSD. I mean page file usage, TRIM, garbage collection, degradation and failure rates... I just don't really see the point. I use my PC for gaming and I think a RAID setup with 6Gb/s HDDs is the way to go.
Here is a comparison of a super throttled state between various levels of compression.
The last copy of static data dropped down to 7.98MB/s
Why do you say that? I don't think AS-SSD "avoids" the filesystem. Otherwise, it would corrupt the filesystem by writing to LBAs that the filesystem may be using. And AS-SSD has never corrupted the filesystem on any drive I have run it on.
I think what it does is open a file of a certain size (say, 1 GB) so that it has a range of space to play with, and then limits itself to writing only in that space. And it apparently deletes the file when it is done. I never looked in the Trash after running it to see if there is a deleted file in the trash. If there is, then TRIM would definitely work when you empty the trash.
Benchmarks generate temp files that auto delete. When that happens a TRIM command is issued.
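A sketch of that temp-file pattern (assuming Windows 7+ with NTFS and TRIM enabled; the OS, not the application, sends the TRIM when the file is deleted):
Code:
import os
import tempfile

# Create a temp file, fill it with data, then delete it. The filesystem tracks
# which LBAs the file occupied, so the delete lets the OS pass a TRIM for
# exactly that range down to the SSD - the app itself never issues TRIM.
fd, path = tempfile.mkstemp(suffix=".bin")
try:
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(64 * 1024 * 1024))   # 64 MB of benchmark-style data
finally:
    os.remove(path)   # the delete is what triggers the TRIM, not this script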
I'm assuming you can write at the device level based on the fact that you can read from it directly. I'm not 100% sure, which is why I asked the question. I'm even more unsure about what level the TRIM command comes from.
It might help explain why a TRIM "hang" does not occur whilst the V2 is being benchmarked.
EDIT - I'm going to terminate the V2 endurance test. Seems pointless if DuraClass is going to throttle the drive to make sure it lasts 5 years.
A real shame about the throttling but we knew it would pop up; it showed its face a little later than I expected.
updated the chart in post #1
The Intel (Kingston) is still going strong...
Well it was expected, but I didn't think performance would drop so low.
I was getting huge response times for uncompressible data: 87ms for a 4MB sequential xfer / 11.37 IOPS. For writes, 570ms / 1.75 IOPS.
I don't really get why DuraClass is needed when the drive benefits from compression. :down:
Q: How many IOPS does your ssd put out?
A: 1
Classic.
Ao1, 30TB written in how much time?
7 days exactly. EDIT: Looking at my first post, it was within an hour of 7 days.
43TB. 78%. I started the new software which includes some random writes 1TB ago.
It will be interesting to see if/ how much that speeds up wear.
You started on the 16th? 13 days now?
Ao1, can I ask what program you use to generate your charts? They are nice! And thanks for this adventure! I have the Intel G2 160GB, and love it. Currently, I've written ~ 3.9 TB over six months (3287 hours) and have a MWI of 98. I download quite a bit of torrent files, so I always have my drive filling up with media files before I move them to 2TB Samsung storage drives. I wish your efforts in this would get better exposure on other sites. To my knowledge, you are the first to attempt this. Thanks!
WHOA! I didn't look for a whole day and the V2 is outta here, didn't see that coming at all. Man, unbelievable the lack of performance once fully throttled. Now I understand why my friend says his V2 is as slow as a HDD after his long usage (bought it the first week it was available).
lol that literally made me LOL until I cried :)
Quote:
Q: How many IOPS does your ssd put out?
A: 1
Classic.
no no no. HDD=1 SSD=1 SF=big fat 0
Quote:
Seems like again HDD 1 SSD 0 !
For the record though guys... I have a Vertex LE in this here 24/7 rig. Bought it forever ago... I have never sanitary erased it or anything. Haven't even changed the firmware (it is on the first version); basically I have run it maintenance free... however, it is still very very quick. It seems great to me.
We do need to keep in perspective the parameters of the test that is being run here; intentional hammering is very bad for that gen of SF I guess.
34.68TB Host writes
MWI 80
no other changes
Attachment 114705
37.34TB Host writes
MWI 78 (just minutes ago, so down 2 from last report)
no other changes
Attachment 114745
Anvil are you now running the more onerous version of the app?
(had to guess a few of the stats for one_hertz below).
Thanks for the updated chart!
No, not yet, was planning to start using it at 35TB (20GB*365*5) but as One_Hertz didn't start using it till > 40TB I've decided to go on a little longer.
I might make the switch tonight, will be moving it to another computer where it will be running as a spare drive from then on.
49TB. 75%. The random writes aren't speeding anything up it seems.
Are you using the default settings for random writes, 1000ms per loop?
How much random writes have been generated so far?
edit:
updated the chart in post #1, that 320 is really pulling away from the G2, impressive.
~3GB so far I think. I am thinking of increasing the amount of random writes...
By the way, I got a write error in the log yesterday... Not sure if it was a fluke or not.
OK, you could try increasing random writes to 3000ms or maybe 5000ms, that wouldn't be too excessive imho.
Never seen the error, shouldn't really happen, it just means that WriteFile reported false for 1 write op.
I'm not logging the error # but if it happens again I might change it to catch the error #.
40.31TB Host writes
MWI 77
No other changes.
Still running off the same computer, will try moving it tonight.
I really appreciate your efforts BUT if your experiments show the drives lasting for quite a long time under so much stress THEN why are only SLC drives recommended for servers and intense write I/O applications?
Maybe this is not random enough with the file placement? I think the SF drive somehow converts random writes into sequential ones after some time, but now that it is out of the race it really does not matter anymore, etc.
Enterprise usage would mean staying under loads such as these, 24/7, for years.
About to move the drive.
Current status is
41.15TB
MWI 76
Attachment 114782
Looks like the move was successful.
It's sitting in my only AMD system, ASUS CHIV, an SB850 based MB.
As a non-OS drive it is writing a bit more per hour, right now it looks like it's increasing from ~2.6TB to ~3.0-3.1TB per day.
I've switched to using the version that produces random writes and I've set it to write random IO for 2500ms per loop.
updated the chart as well.
edit:
Increased the random IO part to 3500ms per loop, might have to give it some more, will be monitoring.
Does somebody know what should happen to the drive when the wear level hits 100%? Will it refuse to work or will it work until all spare cells are dead?
I agree. The SSD will definitely not stop writing at that point. It should be some time after that threshold is reached of course, but the requirements for it to get to that level are intentionally low. They have given themselves tons of play (for sure with the Intels).
Quote:
Do you mean the MWI (media wear indicator) SMART attribute? Which decreases to 1 (not increases to 100).
If so, that is just a guideline. The SSD will likely continue to work for some time after MWI hits 1.
A quick report
12 hours generated ~1.48TB, meaning it was a bit optimistic when it started; still it's looking like I'll get an extra 200-300GB per day.
In that same 12 hour period it generated ~1.86GB of random 4K writes.
I'm going to let it run for 24h and then I'll decide what to do with random writes.
42.95TB Host writes
MWI 75
56TB. 71%. I've put the random writes loop length to 5000ms.
Updated the chart (post #1)
43.81TB Host writes
MWI 75
~4GB of random writes in ~19.5 hours; I might end up with 5000ms as well.