
Thread: Sandforce Life Time Throttling

  1. #401
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Dragoonseal View Post
    You guys are like the A-Team or MacGyvers of SSD testers, reverse engineering top secret classified information about controllers using paper clips, ballpoint pens, rubber bands, tweezers, nasal spray, and turkey basters!
    No, I am not aware of any reverse engineering going on here. That term almost always refers to analyzing something in order to figure out secrets of how it works SO THAT YOU CAN DESIGN YOUR OWN COPY using the principles from the device being analyzed. I seriously doubt anyone here wants to know how to design their own SSD throttling system.

    All that is being done here is to try to find out the operational parameters of the device so that its performance and limitations are known. That is basic information that OCZ should have included in the product datasheet. It is not secret information, just basic functionality that OCZ has negligently failed to document.

  2. #402
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by Ao1 View Post
    Here is a comparison of write speeds at different compression levels.

    For sequential xfers there are three main bands:

    • 0% to 8% compression = ~100% of max speed
    • 25% to 46% compression = ~45% of max speed
    • 67% to 100% compression = ~30% of max speed

    4K QD1
    Performance is approximately the same regardless of compression ratio.

    4K QD 4 & QD 16
    Compressible 4K xfers are assisted by queue depth.

    Attachment 118473

    Attachment 118474

    Attachment 118475
    EDIT: I added an X25-M just to help show where compression might be helping.
    Based on those speeds vs. mine (and on how bad the V2 with Hynix 32nm is), and based on how my compression/WA figures are almost exactly 2x what has been calculated and corroborated by you and sergiu, I think this custom-baked firmware is merely reporting 2x the actual NAND writes.

    You have a 34nm 40GB, which is 85-93% as fast as a 34nm 120GB. In the two Anand benchmarks shown, the Hynix 32nm 120GB is 75-80% as fast as the IMFT 34nm 120GB, and in his old tests the 60GB was only ~95% as fast as the 120GB (for the 34nm varieties, at least), so it is logical that your drive is 10-20% faster than mine in sequential writes at the 46% and 101% settings.

    EDIT: realized yours are Vertex 3 write speeds...my analysis of my drive being slow compared to your 40GB V2 is out the window
    Last edited by Vapor; 07-31-2011 at 04:27 PM. Reason: edit

  3. #403
    Registered User
    Join Date
    Nov 2010
    Posts
    24
    Quote Originally Posted by johnw View Post
    No, I am not aware of any reverse engineering going on here. That term almost always refers to analyzing something in order to figure out secrets of how it works SO THAT YOU CAN DESIGN YOUR OWN COPY using the principles from the device being analyzed. I seriously doubt anyone here wants to know how to design their own SSD throttling system.

    All that is being done here is to try to find out the operational parameters of the device so that its performance and limitations are known. That is basic information that OCZ should have included in the product datasheet. It is not secret information, just basic functionality that OCZ has negligently failed to document.
    How you could take my obvious jest as such a serious statement I will never know. Direct your GRR ANGER at someone else please.

    SandForce SSDs have so many mismarketed or outright omitted factoids and aspects that it is detestable. I wholeheartedly applaud threads like this for delving into them. Carry on!
    Lilim
    Intel Core i7 920 @4.2GHz
    EVGA x58 Micro - HAF 932 - Noctua NH-D14
    Dual SLI Nvidia GTX 480s - 6GB DDR3 1600MHz
    3x Intel X25-M G2 (80GB) SSD in RAID0

  4. #404
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Dragoonseal View Post
    How you could take my obvious jest as such a serious statement I will never know. Direct your GRR ANGER at someone else please.
    What anger? How could you take my simple clarifying post as angry?

  5. #405
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by johnw View Post
    What anger? How could you take my simple clarifying post as angry?
    I think it was the capital letters



    I'm fairly convinced that my firmware's counting of NAND writes accounts for the 2x compression/WA figures. Here's an updated chart with 25% and 46% completed. 0-fill and 8% will likely be revisited (I'd like E9 to increase by at least 3000 for each test) and the others haven't been attempted yet. When I do my own SF-2200 testing, I'll replace Ao1's numbers. The SF-2200 will not be entered in the endurance testing; I have a real use for it that doesn't involve killing it

    [Attachment: SFComp.png]
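
    For anyone following along, the chart math is just the NAND-write delta divided by the host-write delta between two SMART snapshots. A minimal sketch in Python, with placeholder numbers (E9 = NAND GiB written, EA/F1 = host GiB written, per this thread's usage):

    Code:
    # Deriving a compression/WA figure from two SMART snapshots.
    # E9 = NAND GiB written, host = host GiB written (per this thread);
    # the sample numbers are placeholders, not real drive data.

    def compression_ratio(e9_before, e9_after, host_before, host_after):
        """NAND writes divided by host writes over the test interval."""
        return (e9_after - e9_before) / (host_after - host_before)

    ratio = compression_ratio(e9_before=12000, e9_after=15200,
                              host_before=18000, host_after=22000)
    print("NAND/host ratio: %.2f" % ratio)  # 0.80 -> compressed to ~80%
    # On my drive this ratio reads ~2x too high if E9 is double-counting.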

    Ao1, what are your plans for your V3 now that the LTT and compression mystery has been somewhat decoded? Longer idle periods, or putting the drive to real use?

  6. #406
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by Vapor View Post
    I think it was the capital letters
    Ah, should I have used bold instead? Just trying to emphasize that part of the definition!

  7. #407
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Vapor View Post
    Ao1, what are your plans for your V3 now that the LTT and compression mystery has been somewhat decoded? Longer idle periods, or putting the drive to real use?
    The takeaway for me on compression is that unless you are able to compress files to between 0-fill and 8% fill, you are not going to see any benefit over other brands of SSDs that don't use compression.

    For sequential writes the data needs to be compressible to between 25% and 46% to be as fast as an X25-M. If the data is less compressible than that, the V3 is slower than the X25-M.

    I think it is highly likely that the SF drives are configured for database applications, where performance is no doubt great. For typical client use you might occasionally see faster performance, but you are more likely to see the slower speeds specified for incompressible data.
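
    As a rough illustration, those sequential bands boil down to a step function. A minimal sketch in Python (the cut-offs between the measured settings and the 120 MB/s maximum are assumptions for illustration; only the listed points were actually tested):

    Code:
    # Step-function model of the measured sequential-write bands.
    def seq_speed_fraction(compression_pct):
        """Approx. fraction of max sequential write speed for a given
        compression setting (0 = fully compressible)."""
        if compression_pct <= 8:
            return 1.00   # 0-8%: ~100% of max speed
        if compression_pct <= 46:
            return 0.45   # 25-46%: ~45% of max speed
        return 0.30       # 67-100%: ~30% of max speed

    max_seq = 120.0  # MB/s, hypothetical drive maximum
    for setting in (0, 8, 25, 46, 67, 100):
        print("%3d%% -> ~%.0f MB/s" % (setting, max_seq * seq_speed_fraction(setting)))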

    I would like to test a 240GB drive now and a small drive using 4k random writes, full span.

    [Attachment: Untitled.png]
    Last edited by Ao1; 08-01-2011 at 12:24 AM.

  8. #408
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by Vapor View Post
    I'm fairly convinced that my firmware's counting of NAND writes accounts for the 2x compression/WA figures.
    The really big difference here is that your drive does not have DuraWrite activated. Maybe there is a good reason for DuraWrite to be activated that we have not yet worked out. There must be something more to it; otherwise it's hard to see why OCZ etc. would activate it.

  9. #409
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Vapor View Post
    I'm fairly convinced that my firmware's counting of NAND writes accounts for the 2x compression/WA figures. Here's an updated chart with 25% and 46% completed. 0-fill and 8% will likely be revisited (I'd like E9 to increase by at least 3000 for each test) and the others haven't been attempted yet. When I do my own SF-2200 testing, I'll replace Ao1's numbers. The SF-2200 will not be entered in the endurance testing; I have a real use for it that doesn't involve killing it
    Data posted by Ao1 in http://www.xtremesystems.org/forums/...=1#post4917941 changes things a little. Could you format the drive using NTFS with an 8KiB cluster size and run some sequential writes? By default, NTFS sets the cluster size to 4KiB, which would translate to: read 8KiB page -> update 4KiB -> write 8KiB back. If this is the case, then with an 8KiB cluster size and sequential writes, you should see compression rates similar to all the other SandForce SSDs. Also, please check the partition offset with a program like Active Partition Manager, because it must now be aligned to 8KiB
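
    That read-modify-write arithmetic, sketched out (worst case for isolated writes, i.e. no coalescing of neighbouring clusters; the sizes are the ones under discussion):

    Code:
    # NAND cost of one isolated host write when the filesystem chunk
    # is smaller than the NAND page: the whole page gets written anyway.
    import math

    def nand_bytes_written(write_size, page_size):
        """Bytes of NAND written for one isolated write (no coalescing)."""
        return math.ceil(write_size / page_size) * page_size

    for cluster in (4096, 8192):
        nand = nand_bytes_written(cluster, page_size=8192)
        print("%dB cluster on 8KiB page: WA = %.1f" % (cluster, nand / cluster))
    # 4096B cluster: WA = 2.0 (read 8KiB, update 4KiB, write 8KiB back)
    # 8192B cluster: WA = 1.0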

  10. #410
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Good observation. Double the page size and double the block erase size.

  11. #411
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    Great findings!

    Alignment shouldn't be an issue unless the drive is partitioned as a spare drive (W7 OS partitions are aligned at 100MB by default).

    Could be that spare drives are aligned at 1MB, so it's worth checking.

    As long as the drive is aligned there shouldn't be any major difference in compression, whether it's 4 or 8 KiB pages or 512KB/1MB/2MB blocks.
    I'm pretty sure that SF was aware that those factors would change.

    edit:
    Looks like W7 aligns the second partition at 101MB (103424KB); could be worth looking at.

    edit2:
    OK, it could be that we need to run some more tests, as spare drives are aligned at 1MB (1024KB) by default, so 2MB-block SSDs aren't aligned.
    I'll run some tests on one of my drives that I'm pretty sure is a non-34nm drive.

    So, why didn't MS align at 2048KB or 102400KB?
    Well, let's see if it makes a difference.
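
    The offsets in question are quick to check (offsets in KB as given above; the boundaries are the page/block sizes being discussed):

    Code:
    # Check the partition offsets discussed above against possible
    # page/block boundaries (all values in KiB).
    offsets = {"W7 OS partition (100MB)": 102400,
               "W7 second partition (101MB)": 103424,
               "spare drive default (1MB)": 1024}

    for name, offset in offsets.items():
        for boundary in (8, 1024, 2048):  # 8KiB page, 1MB / 2MB blocks
            state = "aligned" if offset % boundary == 0 else "MISALIGNED"
            print("%s vs %dKB: %s" % (name, boundary, state))
    # Only the 2048KB boundary fails, and only for the 1MB and 101MB
    # offsets, which is exactly the 2MB-block concern above.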
    Last edited by Anvil; 08-01-2011 at 04:21 AM.

  12. #412
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Quote Originally Posted by Ao1 View Post
    The really big difference here is that your drive does not have DuraWrite activated. Maybe there is a good reason for DuraWrite to be activated that we have not yet worked out. There must be something more to it; otherwise it's hard to see why OCZ etc. would activate it.
    AFAIK, the warranty length was just set to 0 or something inconsequentially short. I really doubt doing that could have any effect on these 2x E9 writes.

    Quote Originally Posted by sergiu View Post
    Data posted by Ao1 in http://www.xtremesystems.org/forums/...=1#post4917941 changes things a little. Could you format the drive using NTFS with an 8KiB cluster size and run some sequential writes? By default, NTFS sets the cluster size to 4KiB, which would translate to: read 8KiB page -> update 4KiB -> write 8KiB back. If this is the case, then with an 8KiB cluster size and sequential writes, you should see compression rates similar to all the other SandForce SSDs. Also, please check the partition offset with a program like Active Partition Manager, because it must now be aligned to 8KiB
    Will reformat and set cluster size to 8KiB between stopping 67% and starting 67% ND.

    Biggest reason I don't think my SSD is writing double what other SF-1200s write: that would mean my SF-1200 is nearly 2x as fast as displayed, which doesn't seem plausible. AS-SSD of this SF-1200 (67MB/s) vs. AS-SSD of a Vertex 2 60GB Anvil had/has (link, 94.3MB/s).

    If my SSD were writing 2x what others write, it would be putting down the equivalent of 134MB/s but an IMFT 34nm version of the same drive would just do 94.3MB/s? That doesn't make sense when OCZ knows this is the bottom-bin NAND and only puts it on the cheapest of the Vertex 2s.

    Anyway, will reformat with the 8KiB cluster size and see if that helps random reads, but I'm honestly not expecting much. I'm sticking with the theory that the E9 value is moving 2x as fast as it should. And it's not all bad; it does give me slightly higher resolution when measuring compression

    EDIT: deleted and reformatted, here's an Active Partition Manager screenshot of both SSDs in the endurance testing:
    [Attachment: apm.PNG]
    Even with 8KiB clusters, W7 stuck with 1024K alignment.

    EDIT2: if my SSD were secretly misaligned with 1024K alignment, wouldn't that mean nearly all SSDs with 8KiB pages are misaligned and no one has noticed or said anything until now?

    Doesn't it make more sense that A) this low-cost Hynix 32nm is really bad (either overall or with this controller) and B) E9 is moving at 2x due to a counting bug?
    Last edited by Vapor; 08-01-2011 at 06:56 AM. Reason: edit2

  13. #413
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    I'm still testing my drive but it doesn't look like it changes much (so it could be that it's a 512KB or 1MB block size).

    Here's how to create a partition with 2MB alignment (in case others want to try).

    Note
    Input is in bold.
    Your disk layout will of course be totally different, so you need to know which drive to select.
    In this case I created a partition on a Vertex 2 60GB; it is shown as a 55GB drive.

    1) Delete the current partition using Disk Management
    2) Open an elevated Command Prompt.

    C:\Windows\system32>DISKPART

    Microsoft DiskPart version 6.1.7601
    Copyright (C) 1999-2008 Microsoft Corporation.
    On computer: BEAST-PC

    DISKPART> LIST DISK

    Disk ###  Status  Size     Free   Dyn  Gpt
    --------  ------  -------  -----  ---  ---
    Disk 0    Online  55 GB    55 GB
    Disk 1    Online  119 GB   35 GB
    Disk 2    Online  74 GB    0 B
    Disk 3    Online  1863 GB  0 B         *

    DISKPART> SELECT DISK 0

    Disk 0 is now the selected disk.

    DISKPART> CREATE PARTITION PRIMARY ALIGN=2048

    DiskPart succeeded in creating the specified partition.

    DISKPART> FORMAT QUICK

    100 percent completed

    DiskPart successfully formatted the volume.

    DISKPART> LIST PARTITION

    Partition ###  Type     Size   Offset
    -------------  -------  -----  -------
    * Partition 1  Primary  55 GB  2048 KB

    DISKPART> ASSIGN LETTER=T

    DiskPart successfully assigned the drive letter or mount point.

    DISKPART> EXIT

    That's it. I tried using Active Partition Manager but I guess I'm just used to DISKPART.
    (It is doable in APM.)

  14. #414
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Thanks Anvil, I couldn't get it right in APM

    With diskpart (your guide was a little out of order: you need to create the partition, select the partition, assign a letter, then format), I aligned it to 2048K, and AS-SSD performance was indistinguishable from the 8KiB-cluster format numbers I ran earlier today (both were worse than the fresh AS-SSD numbers I ran with default alignment and cluster size on Saturday, but that's because the drive is no longer fresh).

    Just did a SE and repeated the align/format process, here's the AS-SSD:
    [Attachment: 2048kalign.PNG]

    Performance is no better than when the drive was fresh with default alignment.

    Will know tomorrow if the 2x E9 reporting goes away, but I bet it doesn't.

  15. #415
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by Vapor View Post
    EDIT2: if my SSD were secretly misaligned with 1024K alignment, wouldn't that mean nearly all SSDs with 8KiB pages are misaligned and no one has noticed or said anything until now?

    Doesn't it make more sense that A) this low-cost Hynix 32nm is really bad (either overall or with this controller) and B) E9 is moving at 2x due to a counting bug?
    Actually I believe bluestang has the same issue on his V2 60: http://www.xtremesystems.org/forums/...=1#post4892082
    Back then, when some of us posted SMART data, he was the only one with abnormal WA. Now I have two theories:
    1) If the cluster size is 4KiB, it does not matter whether the partition is aligned to 8KiB; the OS is still issuing write commands in 4KiB chunks, and if these commands are not sequential, they will force the controller to write double. So WA should be normal when an 8KiB cluster size is chosen.
    2) What I have suspected all along is that the controller has a compression buffer of 32KiB = 8 * 4KiB pages. This means the controller waits until the buffer is full, then compresses the data. In the best case, with 0-fill data, the controller saves it all in only one page. Now the buffer is the same but the page size has doubled, which means that for each 32KiB it will write, best case, 8KiB of data. What does this mean? If the OS issued write commands in multiples of 8KiB and the partition were aligned, we should see (a rough sketch of this model follows below):
    - a compression rate of 25% for 0-fill
    - a compression rate of 25% for 8%
    - a compression rate of 50% for 25%
    - a compression rate of 75% for 46%
    - a compression rate of 100% for 67% and above
    - WA between 1 and 1.25 for incompressible data.
    The compression rates above are theoretical, without taking WA into account. With WA they should be a few percent higher.
    Of course, there is also the third theory, the doubling, but I doubt the SandForce programmers added such a nice bug.
    Also, it should be taken into account that 4K random reads/writes no longer make sense on an 8KiB-page drive. And alignment, I believe, matters for pages, but it does not make sense for blocks, because physical blocks are no longer mapped to contiguous sectors as with HDDs.
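
    Here's one way to read theory 2) as code. The 32KiB buffer and round-up-to-whole-pages behaviour are the assumptions stated above; how a given test setting maps to a compressed size is a further guess, so this sketch reproduces the 25% floor and the staircase shape rather than the exact per-setting list:

    Code:
    # Best-case model: compress a full 32KiB buffer, write the result
    # rounded up to whole NAND pages (minimum one page).
    import math

    BUFFER = 32 * 1024

    def best_case_rate(compressed_fraction, page_size):
        """Best-case NAND/host ratio for one full compression buffer."""
        pages = max(1, math.ceil(compressed_fraction * BUFFER / page_size))
        return pages * page_size / BUFFER

    for frac in (0.0, 0.08, 0.25, 0.46, 0.67, 1.0):
        r4 = best_case_rate(frac, 4096)
        r8 = best_case_rate(frac, 8192)
        print("compresses to %3.0f%%: 4KiB page %4.0f%%, 8KiB page %4.0f%%"
              % (frac * 100, r4 * 100, r8 * 100))
    # The floor with 8KiB pages is 8/32 = 25%: even 0-fill cannot
    # report better than 25% on this model, matching the list above.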
    Last edited by sergiu; 08-01-2011 at 10:19 AM.

  16. #416
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    ^^ Here's a more recent one. Which IDs are you talking about: E9, EA, F1, F2?

    [Attachment: Untitled-3.jpg]
    24/7 Cruncher #1
    Crosshair VII Hero, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer II 420 AIO, 4x8GB GSKILL 3600MHz C15, ASUS TUF 3090 OC
    Samsung 980 1TB NVMe, Samsung 870 QVO 1TB, 2x10TB WD Red RAID1, Win 10 Pro, Enthoo Luxe TG, EVGA SuperNOVA 1200W P2

    24/7 Cruncher #2
    ASRock X470 Taichi, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer 280 AIO, 2x16GB GSKILL NEO 3600MHz C16, EVGA 3080ti FTW3 Ultra
    Samsung 970 EVO 250GB NVMe, Samsung 870 EVO 500GBWin 10 Ent, Enthoo Pro, Seasonic FOCUS Plus 850W

    24/7 Cruncher #3
    GA-P67A-UD4-B3 BIOS F8 mod, 2600k (L051B138) @ 4.5 GHz, 1.260v full load, Arctic Liquid 120, (Boots Win @ 5.6 GHz per Massman binning)
    Samsung Green 4x4GB @2133 C10, EVGA 2080ti FTW3 Hybrid, Samsung 870 EVO 500GB, 2x1TB WD Red RAID1, Win10 Ent, Rosewill Rise, EVGA SuperNOVA 1300W G2

    24/7 Cruncher #4 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
    GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
    OCZ 2x2GB DDR3-1600MHz C7, Gigabyte 7950 @1200/1250, Crucial MX100 128GB, 2x1TB WD Red RAID1, Win10 Ent, Centurion 590, XFX PRO650W

    Music System
    SB Server->SB Touch w/Android Tablet as a remote->Denon AVR-X3300W->JBL Studio Series Floorstanding Speakers, JBL LS Center, 2x SVS SB-2000 Subs


  17. #417
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    @bluestang

    E9 is ~2x host writes; in normal circumstances it should be 0.6-0.8 * F1, so about 384-512 would be normal.

    There's something wrong here; could you try reapplying the 1.33 firmware from OCZ?

  18. #418
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Ratio between E9 and EA/F1 should be less than 1.2x (realistically, less than 1.0x).

    Looks like your ratio is also higher than it should be, although with XP there's a long list of things that could contribute to that (lack of TRIM comes to mind first, legitimately bad alignment second). Do you have an AS-SSD screenshot for your V2? How old/new is the V2? What's the unformatted size of the disk (either 55.9GB or 51-ish)?
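
    As a sanity-check sketch of that rule of thumb (values are placeholders in the GiB units used in this thread; the cutoffs are the ones above):

    Code:
    # Classify an E9 (NAND writes) vs EA/F1 (host writes) ratio.
    def check_e9(e9, host):
        ratio = e9 / host
        if ratio < 1.0:
            return "%.2fx - normal (compression is winning)" % ratio
        if ratio < 1.2:
            return "%.2fx - plausible worst case" % ratio
        return "%.2fx - suspect (2x counting bug or bad workload?)" % ratio

    print(check_e9(e9=448, host=640))   # 0.70x - normal
    print(check_e9(e9=1280, host=640))  # 2.00x - suspect, like mine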

  19. #419
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    Alignment is at 2048K according to AS-SSD.
    Got the drive new as an RMA replacement in early April, from OCZ's VTXE fiasco.
    It's a 2Xnm drive using 32Gbit NAND chips, so 55.9GB.

  20. #420
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    25nm should be ~51GiB (or GB, as Windows reports); 32/34nm is 55.9GiB, so you have either IMFT 34nm or Hynix 32nm. Sounds like your alignment is good; do you have the performance numbers from AS-SSD? Hynix 32nm and IMFT 34nm should perform very, very differently

    If you have Hynix 32nm, then our drives have something in common aside from the higher compression/WA figures. Still don't know if yours are dead-on 2x like mine are though.

  21. #421
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
    There were two different 2Xnm NAND chips (64Gbit and 32Gbit) thrown into the Vertex 2 drives earlier this year. The 64Gbit parts performed like crap and didn't comply with IDEMA; OCZ caught hell for this and issued free 32Gbit replacements.

    That said, I'm still not happy with the performance...

    [Attachment: as-ssd-bench OCZ-VERTEX2 5.2.2011 2-30-5.png]

  22. #422
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    First, the lack of TRIM on Windows XP does not increase WA significantly, or maybe not at all. I have XP and, without counting the GiB burned for tests, I am around 0.6-0.8 WA. But there might be something else (my third theory): by monitoring disk activity with Diskmon (from Microsoft) I noticed that it does not write in multiples of 8 sectors. Sometimes it writes one sector, sometimes 4 or more, depending on what it needs. Now, if one sector is written, it will probably be registered as 512 bytes of host writes but 8192 bytes of NAND writes. I assume Windows 7 writes only chunks equal to the cluster size, but I might be wrong. This kind of write could increase WA significantly.
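
    The single-sector case, as arithmetic:

    Code:
    # One isolated 512-byte host write still costs a whole 8KiB page.
    sector, page = 512, 8192
    print("per-write WA: %dx" % (page // sector))  # 16x
    # Even a 4-sector (2048B) write would cost 8192/2048 = 4x on its
    # own, so XP's sub-cluster writes could inflate WA noticeably.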

    @Vapor, @bluestang - could you test random reads/writes for both 4K and 8K at QD 1, 4, and 32 with Anvil's Storage Utilities? Not sure what would happen for reads, but for writes I would expect a significant increase in speed at QD16/32 for 8K compared to 4K

    Later edit:
    @bluestang
    Just noticed that between your SMART posts E9 changed by 384 while F1 changed by 256, which would give a WA of 1.5
    Last edited by sergiu; 08-01-2011 at 12:36 PM.

  23. #423
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Hey bluestang,

    Not sure if this works the same with XP, but if you hit Ctrl/Alt/Delete > Start Task Manager > Performance > Resource Monitor > Disk, you will see Disk Activity and Processes with Disk Activity.

    Maybe that would help to see if there is any unusual write activity.

  24. #424
    Admin
    Join Date
    Feb 2005
    Location
    Ann Arbor, MI
    Posts
    12,338
    Looks like bluestang and I don't have the same NAND....his 4k QD1 reads are nearly 2x mine, lol. 55.9GB with 25nm though, hmmm, didn't know that was available....wonder where they skimped: OP, RAISE, or wear leveling/performance?


    Anyway, here are the tests you wanted, sergiu. All tests were done with 101% incompressible data.

    2048K align, 4096B cluster size:

    RW:
    4k QD1: 46.86MB/s
    4k QD4: 53.87MB/s
    4k QD8: 55.07MB/s
    4k QD16: 54.92MB/s
    4k QD32: 53.31MB/s

    8k QD1: 45.84MB/s
    8k QD4: 38MB/s
    8k QD8: 37.63MB/s
    8k QD16: 40.37MB/s
    8k QD32: 43.14MB/s

    RR:
    4k QD1: 10.57MB/s
    4k QD4: 40.63MB/s
    4k QD8: 60.57MB/s
    4k QD16: 69.5MB/s
    4k QD32: 64.77MB/s

    8k QD1: 16.97MB/s
    8k QD4: 48.34MB/s
    8k QD8: 53.37MB/s
    8k QD16: 57.66MB/s
    8k QD32: 65.52MB/s


    2048K align, 8192B cluster size:

    RW:
    4k QD1: 46.43MB/s
    4k QD4: 44.79MB/s
    4k QD8: 44.42MB/s
    4k QD16: 47.32MB/s
    4k QD32: 41.06MB/s

    8k QD1: 35.68MB/s
    8k QD4: 39.88MB/s
    8k QD8: 37.78MB/s
    8k QD16: 39.51MB/s
    8k QD32: 41.89MB/s

    RR:
    4k QD1: 9.61MB/s
    4k QD4: 42.43MB/s
    4k QD8: 60.26MB/s
    4k QD16: 67.03MB/s
    4k QD32: 65.07MB/s

    8k QD1: 16.3MB/s
    8k QD4: 41.7MB/s
    8k QD8: 59.28MB/s
    8k QD16: 60.02MB/s
    8k QD32: 71.25MB/s
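
    To make the 4K-vs-8K comparison easier to eyeball, here are the QD32 numbers from the 4096B-cluster run converted to IOPS (figures copied from above):

    Code:
    # QD32 results from the 4096B-cluster run, converted to IOPS.
    results = {("4k", "RR"): 64.77, ("8k", "RR"): 65.52,
               ("4k", "RW"): 53.31, ("8k", "RW"): 43.14}

    for (size, test), mbps in sorted(results.items()):
        block = 4096 if size == "4k" else 8192
        print("%s %s QD32: %6.2f MB/s = %6.0f IOPS"
              % (size, test, mbps, mbps * 1e6 / block))
    # Reads: same bandwidth at half the IOPS. Writes drop ~20% at 8k,
    # nowhere near the 2x an 8KiB-page penalty would predict.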

  25. #425
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Not what I expected... it seems SandForce controllers are highly optimized, so there is no practical difference between 4K and 8K performance when different page sizes are used. Most probably an 8K page is internally treated as two 4K logical pages (like the lower and higher halves of an address) and the controller does a good job merging clusters. Either way, this "optimization" will translate into higher WA, because the controller will not be able to cluster pages all the time. From this point of view, native 8K cluster support should give an advantage in WA, because there is no need for optimization by the controller (unless it treats 8K just as a stream of two 4K pages and applies the same treatment).

    @Vapor - I would propose the following tests, if possible, and look at WA:
    - sequential incompressible, 4K and 8K cluster size
    - random 8K incompressible, 4K and 8K cluster size
    I would expect 1 < WA(sequential, 8K cluster) < WA(sequential, 4K cluster) < WA(random 8K, 8K cluster) < WA(random 8K, 4K cluster) < 2-2.5. These would be long-running tests, so maybe you could do them as part of the endurance testing. If there is no significant difference in WA, then most probably the controller always runs internally in "optimized mode".
    This 8K page treatment is worth investigating on non-SandForce SSDs too, but without a definite counter for flash writes it would be hard to see any difference. Maybe by looking at how fast the wear indicators evolve.
