Forgive the noob question - but what/where is the Xtreme section?
Regds, JR
.... OK - I've found it:)
Heya JR...good to see ya round. ;)
Guys, what would be the best strip sizes for 2x Vertex 4 in R0 when on:
ich10r
pch
areca 1261
areca 1880? :)
Hi Steve,
I wish I could tell you :)
On ICH/PCH there have been numerous changes with the latest drivers; some prefer 10.x and some find 11.x to be fine.
I mostly end up using 64KB or 128KB strip size but I wouldn't rule out that smaller strips could be better performers on some benchmarks.
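If it helps, here's a rough back-of-the-envelope sketch of what the strip size actually decides - which member drive(s) a request lands on in a 2-drive R0. The offsets and request sizes are just made-up examples, not anything measured:

[CODE]
# Rough sketch of how strip size splits an I/O across a 2-drive RAID 0.
# Offsets, request sizes and strip sizes below are illustrative only.

def strips_touched(offset_kb, length_kb, strip_kb, num_drives=2):
    """Return the member drives a request at offset_kb of length_kb touches."""
    first_strip = offset_kb // strip_kb
    last_strip = (offset_kb + length_kb - 1) // strip_kb
    return sorted({s % num_drives for s in range(first_strip, last_strip + 1)})

for strip_kb in (16, 64, 128):
    print(f"{strip_kb}KB strip: 4KB read at 40KB hits drives {strips_touched(40, 4, strip_kb)}, "
          f"256KB sequential read hits drives {strips_touched(0, 256, strip_kb)}")
[/CODE]

Small random reads stay on one drive no matter what, it's mainly the larger sequential transfers where the strip size decides how the load gets split across the members.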
As for the Vertex 4, I'll order another drive right now and might join in on some of the testing.
Is your PCH on Sandy Bridge or Sandy Bridge-E ?
I did update to the latest fw on the 1880 but never got the time to do a proper analysis of what had changed - it does look to have changed a bit though.
Started a new thread in the extreme storage section - "Vertex 4 F/W 1.3 vs 1.4 across ich10r, pch, Areca 1261 and 1880 and ... " -
http://www.xtremesystems.org/forums/...-and-1880-and-...
:)
Well, I decided to get 2 of these guys for a Z77 R0 - hope everything works... I will be updating all FW from my P67 board... I really wanted the Plextor Pro but it's just way too pricey at the moment.. And coming away from 2 x 64GB M4 in R0, I would say I should see more improvement than I was getting...
My theory of operation on the Vertex 4 128gig: (note, I do not work for OCZ, but this explanation seems to fit the observations seen by me and by reviewers of this drive. It could still be wrong)
I think I understand now why the vertex 4 128gig on firmware 1.4 performs the way it does.
The way OCZ have programmed the Vertex 4 128gig firmware is quite ingenious actually.
For MLC NAND, as discussed in this paper (http://cseweb.ucsd.edu/users/swanson...11PowerCut.pdf), pages are actually programmed twice to store 2 bits per cell. The first time a page is programmed from erased is very fast, as it is going from a known state to a roughly central voltage (or staying unprogrammed). The second time a page is programmed is quite slow, because the existing bit needs to be preserved while the programmed voltage is nudged into one of four finer levels.
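To put some rough numbers on that two-pass idea - the latencies below are made-up illustrative figures, not measurements from the paper or from OCZ:

[CODE]
# Toy model of two-pass MLC page programming. Latencies are assumptions.

T_PASS1_US = 300    # assumed fast first-pass program time (erased -> coarse level)
T_PASS2_US = 1200   # assumed slow second-pass time (refine to 1 of 4 levels, keep old bit)

def program_page(cell_state):
    """Program one more bit into a group of cells; return (new_state, latency_us)."""
    if cell_state == "erased":
        return "half-programmed", T_PASS1_US     # fast: nothing to preserve yet
    if cell_state == "half-programmed":
        return "fully-programmed", T_PASS2_US    # slow: first bit must survive the adjustment
    raise ValueError("cell group already holds 2 bits - it needs an erase first")

state, total_us = "erased", 0
for _ in range(2):
    state, t = program_page(state)
    total_us += t
print(f"storing 2 bits took {total_us} us; a first-pass-only write takes just {T_PASS1_US} us")
[/CODE]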
So what the 128gig Vertex 4 does while more than half the drive is free is program only the first layer of each page. This is the performance mode, and it performs more like SLC NAND than a normal MLC drive would, resulting in amazingly high write speeds on the 128gig drive (350MB/s or more). However, once you reach 50% full, the drive can no longer perform any first-layer writes and must now write the much slower second layer, which seems to take up to 4 times as long. That is the switch to storage mode.
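A crude way to see why the speed would fall off a cliff around 50% full - all numbers assumed for illustration, with the 4x penalty just being the ratio suggested above:

[CODE]
# Crude model of "performance mode": while less than half the NAND is used,
# every write can be a fast first-pass program; past that, writes have to use
# the slow second pass. Speeds are assumed figures, not benchmark results.

FAST_MBPS = 350            # assumed first-pass (SLC-like) write speed
SLOW_MBPS = FAST_MBPS / 4  # assumed second-pass speed, roughly 4x slower

def write_speed(used_fraction):
    """Sustained write speed once the drive is a given fraction full."""
    return FAST_MBPS if used_fraction < 0.5 else SLOW_MBPS

for used in (0.10, 0.45, 0.55, 0.90):
    print(f"{used:.0%} full -> ~{write_speed(used):.0f} MB/s sustained writes")
[/CODE]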
During storage mode, while the drive is more than half full, the drive will maintain 2 bits per cell, but when garbage collection is performed it will attempt to free up NAND blocks so future writes can be performed only on the first layer, resulting in pretty good performance for burst writes.
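If that's right, storage-mode burst behaviour would look something like this - again a sketch with made-up pool sizes and speeds, not the real firmware logic:

[CODE]
# Sketch of storage-mode bursts: GC leaves behind a pool of blocks that can
# still take fast first-pass writes; a burst drains that pool, then slows down.
# Pool size and speeds are assumptions, not anything from OCZ.

FAST_MBPS, SLOW_MBPS = 350, 90
fast_pool_mb = 2048          # assumed first-pass-ready space freed by background GC

def burst_write(size_mb):
    """Return seconds to absorb a burst, using the fast pool first."""
    global fast_pool_mb
    fast_part = min(size_mb, fast_pool_mb)
    fast_pool_mb -= fast_part
    slow_part = size_mb - fast_part
    return fast_part / FAST_MBPS + slow_part / SLOW_MBPS

for _ in range(3):           # three 1GB bursts back to back, no idle time for GC
    print(f"1GB burst took {burst_write(1024):.1f} s, fast pool left: {fast_pool_mb} MB")
[/CODE]

The first couple of bursts would fly, and only once the freed-up blocks run out would the drive drop back to second-layer speeds.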
Other drives, along with the Vertex 4 128gig under firmware 1.3, must distribute first- and second-layer writes evenly, resulting in more consistent, but quite a lot slower, performance over the full surface of the drive.
The reason why the Vertex 4 256gig and 512gig don't really show this behavior is that it isn't needed for high performance ... the controller can interleave enough operations that, at least for the 512gig drive, it can distribute writes to the first and second layer evenly without a performance impact.
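The interleaving argument is basically a dies-in-flight calculation - with enough NAND dies working in parallel, even slow second-pass programs overlap enough to keep the write speed up. Die counts and timings here are assumptions purely for the sake of the arithmetic:

[CODE]
# Back-of-the-envelope: with enough dies programming in parallel, the slow
# second pass stops being the bottleneck. Die counts and timings are assumptions.

PAGE_KB = 8           # assumed page size
T_PASS2_MS = 1.2      # assumed slow second-pass program time per page

def interleaved_write_mbps(num_dies):
    """Throughput if every write is a slow second-pass program, but dies overlap."""
    pages_per_sec = num_dies * (1000 / T_PASS2_MS)
    return pages_per_sec * PAGE_KB / 1024

for capacity, dies in (("128gig", 16), ("256gig", 32), ("512gig", 64)):
    print(f"{capacity} (~{dies} dies assumed): ~{interleaved_write_mbps(dies):.0f} MB/s "
          "even with nothing but second-layer writes")
[/CODE]

On those assumed numbers the bigger drives can hide the second-pass penalty behind parallelism, while the 128gig simply doesn't have enough dies to do the same, which is where the first-layer-only trick earns its keep.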