Here I experiment with RAID 0 to see the impact on NAND writes with a 128K and a 4K stripe.
The Setup
Frankenstein R0
Vertex 2 and Vertex 3
User capacity 74.5GiB
Test file size: 74.43GiB (4MB block size)
Workload: 4K random, full span, 100% incompressible
Drives in steady state
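The E9/EA figures quoted below are the raw values of SMART attributes 0xE9 (lifetime NAND writes) and 0xEA (lifetime host writes); the arithmetic in this post treats the deltas as GiB, which matches how these SandForce drives report them. A minimal sketch of how the readings can be pulled per drive, assuming smartmontools is installed (smartctl lists the attributes in decimal as 233/234, with the raw value in the last column):

```python
# Hedged sketch: read the raw E9/EA counters with smartctl (smartmontools).
# 0xE9/0xEA appear as attributes 233/234 in smartctl's decimal listing.
import re
import subprocess

def read_e9_ea(dev: str) -> dict:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    vals = {}
    for line in out.splitlines():
        # Match an attribute row for ID 233 or 234 and grab the trailing
        # RAW_VALUE column.
        m = re.match(r"\s*(233|234)\s+\S+.*\s(\d+)\s*$", line)
        if m:
            vals[{"233": "E9", "234": "EA"}[m.group(1)]] = int(m.group(2))
    return vals

# e.g. read_e9_ea("/dev/sda") -> {"E9": 41216, "EA": 44672}
```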
Stripe: 128K
SMART readings before starting:
Vertex 2
E9 41,216
EA 44,672
Vertex 3
E9 41,600
EA 41,552
SMART readings after:
Vertex 2
E9 42,112
EA 44,864
Vertex 3
E9 41,923
EA 41,726
Amount of data written = 279,135.94MiB + 76,216.32MiB = 355,352.26MiB / 347.02GiB
NAND writes to Vertex 2 = 896GiB
NAND writes to Vertex 3 = 323GiB
Total NAND writes = 1,219GiB
Host writes to Vertex 2 = 192GiB
Host writes to Vertex 3 = 174GiB
Total host writes = 366GiB
Ratio of NAND writes to host writes = 1:0.30
Speed at end of run = 10.03MB/s
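Every per-run figure in this post falls out of simple deltas of those two counters, summed across the array. A sketch of the bookkeeping, using the 128K-stripe readings above; note the ratios are quoted NAND:host, so 1:0.30 corresponds to a write amplification of roughly 1/0.30 ≈ 3.3:

```python
# Per-run bookkeeping: NAND and host writes are the deltas of the raw
# E9/EA values (treated as GiB), summed across the drives in the array.
def run_stats(before, after):
    """before/after: {drive: (E9, EA)} read at the start/end of a run."""
    nand = {d: after[d][0] - before[d][0] for d in before}  # GiB to NAND
    host = {d: after[d][1] - before[d][1] for d in before}  # GiB from host
    total_nand = sum(nand.values())
    total_host = sum(host.values())
    return nand, host, total_host / total_nand

# 128K-stripe run, readings from above:
before = {"Vertex 2": (41216, 44672), "Vertex 3": (41600, 41552)}
after  = {"Vertex 2": (42112, 44864), "Vertex 3": (41923, 41726)}
nand, host, ratio = run_stats(before, after)
print(nand)              # {'Vertex 2': 896, 'Vertex 3': 323}
print(host)              # {'Vertex 2': 192, 'Vertex 3': 174}
print(f"1:{ratio:.2f}")  # 1:0.30 -> write amplification ~3.3
```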
Stripe: 4K
SMART readings before starting:
Vertex 2
E9 42,112
EA 44,864
Vertex 3
E9 41,936
EA 41,734
SMART readings after:
Vertex 2
E9 43,072
EA 44,992
Vertex 3
E9 42,252
EA 41,903
Amount of data written = 268,862MiB + 76,216.32MiB = 345,078.32MiB / 336.99GiB
NAND writes to Vertex 2 = 960GiB
NAND writes to Vertex 3 = 316GiB
Total NAND writes = 1,276GiB
Host writes to Vertex 2 = 128GiB
Host writes to Vertex 3 = 169GiB
Total host writes = 297GiB
Ratio of NAND writes to host writes = 1:0.23
Speed at end of run = 12.75MiB/s
Stripe: 128K, 4K random full span, 50% OP
SMART readings before starting:
Vertex 2
E9 44,096
EA 45,760
Vertex 3
E9 43,131
EA 42,670
SMART readings after:
Vertex 2
E9 44,224
EA 45,888
Vertex 3
E9 43,268
EA 42,789
Amount of data written = 206,573.44MiB + 37,785.6MiB = 244,359.04MiB / 238.63GiB
NAND writes to Vertex 2 = 128GiB
NAND writes to Vertex 3 = 137GiB
Total NAND writes = 265GiB
Host writes to Vertex 2 = 128GiB
Host writes to Vertex 3 = 119GiB
Total host writes = 247GiB
Ratio of NAND writes to host writes = 1:0.93
Speed at start of run = 48.95MB/s
Speed at end of run = 37.71MB/s
ASU Endurance App (client, not enterprise, workload), drives SE'd (secure erased). Stripe: 128K
SMART readings before starting:
Vertex 2
E9 43,392
EA 45,248
Vertex 3
E9 42,529
EA 42,170
SMART readings after:
Vertex 2
E9 43,648
EA 45,440
Vertex 3
E9 42,717
EA 42,350
Amount of data written = 356.13GiB
NAND writes to Vertex 2 = 256GiB
NAND writes to Vertex 3 = 188GiB
Total NAND writes = 444GiB
Host writes to Vertex 2 = 192GiB
Host writes to Vertex 3 = 180GiB
Total host writes = 372GiB
Ratio of NAND writes to host writes = 1:0.84
Speed at end of run = 66.34MiB/s
Single drive, Vertex 2, 4K random full span. 14.65GB user space, 22.62GB OP
SMART readings before starting:
E9 44,224
EA 45,888
SMART readings after:
E9 44,480
EA 46,080
Amount of data written = 206,555.08MiB + 14,899.2MiB = 221,454.28MiB / 216.26GiB
NAND writes to Vertex 2 = 256GiB
Host writes to Vertex 2 = 192GiB
Ratio of NAND writes to host writes = 1:0.75
Speed at end of run = 31.12MB/s
Single drive, Vertex 3, 4K random full span, no OP
SMART readings before starting:
E9 43,268
EA 42,789
SMART readings after:
E9 44,010
EA 42,926
Amount of data written = 82,850.78MiB + 57,139.20MiB = 136.70GiB
NAND writes to Vertex 3 = 742GiB
Host writes to Vertex 3 = 137GiB
Ratio of NAND writes to host writes = 1:0.18
Speed at end of run = 31.12MB/s
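Pulling the totals together, write amplification (total NAND writes divided by total host writes) for each configuration works out as below; the labels are just my shorthand for the runs above:

```python
# Write amplification per configuration, computed from the totals quoted
# in each run above (all values in GiB; labels are shorthand).
runs = {
    "R0 128K stripe, full span":   (1219, 366),  # WA ~3.33
    "R0 4K stripe, full span":     (1276, 297),  # WA ~4.30
    "R0 128K stripe, 50% OP":      (265,  247),  # WA ~1.07
    "R0 128K stripe, ASU client":  (444,  372),  # WA ~1.19
    "Vertex 2 single, 22.62GB OP": (256,  192),  # WA ~1.33
    "Vertex 3 single, no OP":      (742,  137),  # WA ~5.42
}
for name, (nand, host) in runs.items():
    print(f"{name:30} WA = {nand / host:.2f}")
```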
Summary
TRIM is not effective with a 4K random full-span workload.
Over-provisioning had a significant impact on reducing wear while at the same time increasing write speeds.
With a 4K-stripe R0 array it took around 431GB of writes for the array to reach steady state when running the ASU client workload. Write speeds progressively fell by 26%. (See post #20)
When running the 4K random write workload, write speeds dropped by 50% after 2 minutes and started to fluctuate. After a further 4 minutes write speeds dropped by 50% again and fluctuated more. (See post #42)
Read speeds were significantly impacted when the drives were in a degraded state (134MB/s on an R0 array). Steady-state read performance came out at around 290MB/s.