So I have a question: how much of a boost would you see with SATA 6Gb/s on a Crucial C300, compared to a Crucial C300 on 3Gb/s SATA?
I'm about to find out. I seriously doubt it's a substantial difference. Just get the newest one so you can add more later.
Well, according to the specs in the link:
With write speeds staying at 70MB/s, I'm not sure how much of the 355MB/s is fluff or only achievable under ideal conditions, but even half of the difference between the stated SATA 6Gb/s and SATA II read speeds is decent really.
Quote:
355MB/sec (SATA 6Gb/s) [read]
265MB/sec (SATA 3Gb/s) [read]
6Gbps SATA Performance: AMD 890GX vs. Intel X58/P55
http://www.anandtech.com/show/2973/6...ntel-x58-p55/1
I think it's fair to say that so far we have not seen an optimal controller for SATA 6Gb/s, but no doubt they will arrive soon.
The latest LSI and hopefully soon the Areca 1880 are/should be good 6Gbps controllers - but expensive relative to onboard.
It depends on the specific application you have in mind. The typical user NEVER uses high sequential reads, so that number (355MB/s) is really moot. You will see no visible performance difference, and I can attest to this because I am running an Intel X25, a Crucial C300 at top speed on SATA 6Gb/s and an OWC in the same system right now... visibly identical in performance.
referring to onboard, right? i would agree strenuously. the current 'crop' of onboard 6gb/s controllers is a joke, the main issue being bus strangulation, as they are hacked onto motherboards for marketing purposes. as for raid cards, there really isn't a good 6gb/s SATA solution either, only SAS/SATA at this point. there are some great sas 2 cards out right now, but only from you-know-who. i am not sure if you will see many high-performing sata 6gb/s devices in the wild for a while, as i am not aware of any manufacturers working on a 'flagship' class adapter. right now the only ones i've seen are the puny ones (ref. highpoint).
Quote:
I think it's fair to say that so far we have not seen an optimal controller for SATA 6 yet,
i think from a manufacturer standpoint, sas/sata implementations with the newer 6gb/s protocols are plenty fast right now, and make more sense financially; you get to target both audiences.
I have used previous gen sas/sata (notably the 1680-ix), and the newer gen is a much bigger step up than you'd expect; they are so much better. the sata tunneling protocol is miles better, or maybe it was just the under-performing ROCs that were the issue with sas/sata performance loss.
*cough cough*
Quote:
The typical user NEVER uses high sequential reads, so that number (355MB/s) is really moot
Hey, I just noticed the cough cough... eheheh. Can you give an example or two where the average Joe would ever rely on high sequential reads... and writes during daily use?
Video editing used to be bottlenecked by CPU & memory, but with 64bit computing the emphasis shifts to storage. As far as I can establish, the most demanding storage application comes from digital video capture in an uncompressed HDTV format. Depending on the frame rate and resolution, you need write speeds of at least 120MB/s and as high as 180MB/s.
For the average Joe however faster sequential speeds don’t translate to much of an advantage.
The significantly faster sequential speeds of the C300 in comparison to the Intel drives do not translate to any real advantage in loading times. The Intel is actually slightly faster at loading than the Vertex 2. (If you can believe the benchmarks below).
The faster sequential write speeds of the C300 & the Vertex 2 can be seen in faster installation times, but it’s nothing that reflects the huge difference in sequential write speed specs.
Bottom line is that SSDs are being bottlenecked for desktop use, so faster sequential speeds give little if anything in typical everyday applications.
http://img806.imageshack.us/img806/2663/10495479.png
For file copying, the files have to be large for sequential read speeds to be effective. Smaller & medium size files start to even out.
- A collection of large files: 6.8 GB on average
- Medium: 796 KB on average
- Small: 44 KB on average
http://img840.imageshack.us/img840/2606/78861768.png
http://www.behardware.com/articles/7...-compared.html
A 50Mbps 1080p stream = 6.25 MB/sec [Megabyte-per-second]
You would need forty 50Mbps 1080p streams to get to 250MB/sec, so the comment below from the Anandtech article on the G3 specs seems to be correct.
“So you have multiple 50Mbps streams? To saturate 250MBps, you'd need roughly 40 of those running simultaneously. Maybe that's a realistic scenario. Audio doesn't use nearly as much bandwidth as that. Even a 96KHz 24bit audio stream is only about 300 KB/s. Mixing 100 of those together is only roughly 30 MB/s. I suppose that limits your total number of multiple 50Mbps streams to only 32 video streams (with leftover bandwidth).
I suppose you also have the writing to take care of, so you could halve the above numbers (50 audio tracks plus 16 video streams). Is that too few? I don't work in the music or video processing business, but based on what you threw out there, and some simple math, the 250 MB/s is more than adequate for the scenario.”
http://www.anandtech.com/show/3965/i...ealed?all=true
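The stream-bandwidth arithmetic quoted above is easy to double-check. A quick throwaway sketch (my own numbers-check, not from the article):

```python
# Sanity-check the 1080p stream math: 50 Mbps per stream, 250 MB/s bus.

video_mbps = 50                       # one 1080p capture stream, megabits/s
video_MBps = video_mbps / 8           # = 6.25 MB/s
print(video_MBps)

bus_MBps = 250                        # the claimed sequential bandwidth
print(bus_MBps / video_MBps)          # 40 streams to saturate it

audio_KBps = 96_000 * 24 // 8 / 1000  # 96 kHz, 24-bit mono ~= 288 KB/s (~300 KB/s)
print(audio_KBps)
print(100 * audio_KBps / 1000)        # 100 audio streams ~= 28.8 MB/s (~30 MB/s)
```

The figures line up with the quoted comment: forty 50Mbps streams, or roughly 100 audio tracks in ~30 MB/s.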
There is also a nice explanation of why TRIM in a raid config is so hard to achieve in one of the comments on the link above. It’s the most plausible explanation I’ve seen anyway.
Some facts:
-Any OS uses one (or more) page sizes for all its I/O
-A RAID array is built using a "stripe size" on each of the drives participating in the array
-TRIMing less than a stripe is a very difficult task, even for RAID 0
Example: Windows 7 installed on a RAID 0 array built with 2x HDD and a 64KB stripe
==> The OS cluster size is usually 4KB, so the (simplified) process might be:
-Write a small file to the first cluster (#0 = 4KB)
-The RAID controller reads the first 64KB stripe of the first HDD and rewrites it with the 4KB updated
-Write another small file to the second cluster (#1 = 4KB)
-The RAID controller again reads the first 64KB stripe of the first HDD and rewrites it with the 4KB updated
-Deleting the first file sends a TRIM command for #0
==> Now, if the SSD TRIMs #0, what would the 4KB values read from the first 64KB stripe be when my 3rd small file is written?"
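To make the quoted example concrete, here is my own toy model of that layout (RAID 0, 2 drives, 64 KB stripe, 4 KB clusters); it shows how a single-cluster TRIM lands inside a stripe the controller still treats as one unit:

```python
# Toy cluster-to-stripe mapping for the RAID 0 example above.
# Sizes match the quoted example; the mapping itself is simplified.

STRIPE_KB, CLUSTER_KB, DRIVES = 64, 4, 2

def locate(cluster_no):
    """Map an OS cluster number to (drive, stripe index on that drive, offset KB)."""
    kb = cluster_no * CLUSTER_KB
    stripe_no = kb // STRIPE_KB        # which stripe across the whole array
    return stripe_no % DRIVES, stripe_no // DRIVES, kb % STRIPE_KB

print(locate(0))   # (0, 0, 0)  cluster #0: drive 0, first stripe, offset 0
print(locate(1))   # (0, 0, 4)  cluster #1 shares that very same 64 KB stripe
print(locate(16))  # (1, 0, 0)  only cluster #16 moves to the second drive
```

TRIMming cluster #0 alone would invalidate 4 KB in the middle of a 64 KB stripe that the controller reads and rewrites whole, which is the difficulty the comment describes.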
Vhaara, to answer your question, the faster sequential speeds of SATA 3.0 are currently offset by slower random reads/writes and higher latency due to limitations in the available SATA 3.0 controllers on the market.
If you really need faster sequential speeds for a specific application you might see a benefit, but you might lose out on access times and random performance. (Not that this would really make any difference in typical OS usage patterns).
That is my understanding anyway.
Why is everybody suddenly talking about no visible difference, no typical difference, and such? Why can't we talk about the real difference?
I didn't notice any visible or typical difference when I moved from my 1st gen OCZ Vertex 30GB to a Vertex 2 60GB and then a C300 64GB later ;).
A single C300 on SATA3 is better than on SATA2. Maybe not visibly, or typically, but it's better.
I ran C300 on ARC1280 and LSI 9211. The LSI 9211 was CPU bottlenecked.. or so it seems because it would hit 25% usage and not go over (single core of the CPU).
Even though IOMeter should be multi-threaded, it doesn't mean the OS scheduler can be ;)
On LSI 9211 I can get >200MB/s for both random reads and writes, which is more than what Anand and others got in their reviews by a noticeable margin.
I disagree that you won't see benefits from high sequential speeds; I work with VMs a lot, and the extra speed makes the C300 a lot faster than the X25-E (the X25-M is not even a player in this field!).
Now as far as TRIM in RAID goes, I disagree with the quoted logic. The OS needs to know, at the least, the page size of the SSD, and it would be optimal if the cluster were the same size (or a multiple of it). If the cluster size is smaller than the page size, you will see deletes that cannot send a TRIM command "correctly" - i.e. the last SSD page of the file would only need to be partially invalidated...
It's the same with RAID TRIM. Make the stripe a multiple of (or equal to) the SSD page size. Make the cluster a multiple of the SSD page size.
There's no problem in sending a TRIM there.
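The alignment rule being argued here can be sketched in a couple of lines. A minimal check (sizes are illustrative, not from any datasheet):

```python
# Sketch of the alignment argument: if both the filesystem cluster and the
# RAID stripe are whole multiples of the SSD page size, every TRIM range
# covers complete pages and nothing needs partial invalidation.

def trim_is_clean(cluster_kb, stripe_kb, page_kb):
    """True if cluster- and stripe-sized deletes always hit whole SSD pages."""
    return cluster_kb % page_kb == 0 and stripe_kb % page_kb == 0

print(trim_is_clean(cluster_kb=4, stripe_kb=64, page_kb=4))   # True: aligned
print(trim_is_clean(cluster_kb=2, stripe_kb=64, page_kb=4))   # False: partial pages
```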
Feel free ;)
That is the point I'm trying to make. The sequential speeds of the drives you have mentioned have improved, but you see no benefit. Why? Because whilst sequential speeds have improved, access times have not, and the higher IOPS capability of faster drives is of no use for desktop use.
Let’s say I’m capturing uncompressed HDTV at 250MB/s.
To quote Tony again:
"Here is the thing: if you hammer the drive with, say, enough writes in a few hours that the drive would see under normal use in 7 days, the drive will slow down for 7 days, maybe longer. It does this to protect the nand life. So you guys seeing a 50% drop may actually be seeing 30%, which is the normal drop, then a further 20% because at some stage they have hammered the drive and not realised it’s going to take 5 days or longer for the speed to creep back up. Also remember this write quantity slowdown is further impacted by how you use the drive after you have hammered it."
Tony does not say what an average user would write per day, but let’s use the Intel X25-M spec of 20GB of host writes per day as an example.
20GB of host writes per day x 7 days = 140GB (143,360MB)
If you write at the claimed sustained write performance of a SF drive (250MB/s), you could write 140GB in 573 seconds (~10 minutes). Within 10 minutes of use DuraClass has kicked in to prevent the nand from wearing out, and performance has halved as a result. I can no longer capture uncompressed HDTV at 160MB/s.
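Re-running that arithmetic (the 20GB/day figure is the Intel spec quoted above; 250MB/s is the claimed SF write speed):

```python
# A week's worth of "normal" host writes vs. sustained sequential write speed.

daily_GB = 20                        # Intel X25-M spec: host writes per day
weekly_MB = daily_GB * 1024 * 7      # 143,360 MB in a week
write_MBps = 250                     # claimed sustained write of a SF drive

seconds = weekly_MB / write_MBps
print(weekly_MB)                     # 143360
print(round(seconds))                # 573 s, i.e. under 10 minutes
```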
The point I try to make is that sequential read/ write specs sound great but in reality past a certain point they are more or less useless for desktop users.
If I wanted to capture uncompressed HDTV at 160MB/s I would use HDDs in a RAID setup, not SSDs, because it would kill the nand in no time.
Maybe I’m seeing it wrong? :shrug:
The C300 performs better on SATA3 for sure, but it performs so well on SATA2 that you won't notice any difference, typically or visibly ;). Considering this, can we say current SSDs have already reached the "overkill" limit on SATA2 for the average user and OS disk?
That would be my conclusion.
In terms of what is the fastest consumer SSD, I would say the C300 is technically the fastest (on either SATA 2 or 3) and I would put the X25-M & Vertex 2 at joint second based on an all-round performance assessment. You could find loads of benchmarks that might agree or disagree with that, and I wouldn’t bother arguing with anyone on that basis, in the context that it doesn’t really matter anyway to most end users.
I’m also yet to be convinced that SATA 3.0 will bring any real benefit for desktop users. When hardware and software become more SSD friendly things will change, but when that will happen I don’t know.
+1 with Ao1, -- in particular OS optimizations.
Over the years, OSs have been heavily optimized and tweaked to reduce the "system" impact of slower rotating storage (much slower relative to bus, memory, cache and cpu).
How much faster can the OS execute once engineers streamline for much faster SSD access times ... should be interesting advancements ahead. :yepp:
I agree with Ao1 on highlighted comment.
As far as SSDs go, what would you guys recommend buying right now? I was thinking 120ish GB, or possibly two 64ish GB in RAID. A single 64 I think will just be too small.
I really need a new drive for my main system and do not feel like waiting.
My initial thoughts were to go C300, but an article I read on Anandtech said that for RAID or a lot of writing the SF drives would be better, because the C300's performance diminishes a lot faster. I don't even know if it would be noticeable.
Then today someone mentioned an issue of the SF drives losing data or reverting back to an earlier state of the saved data :shrug:
I would love to get the C300 256gig, but thats a bit more than I want to spend right now.
Any opinions would be greatly appreciated. Thanks
Yeah, there is a 7 page thread on that on OCZ’s forum. SF has been able to reproduce it, but there is no fix at the moment. It doesn’t seem to be a huge issue in terms of occurrences, but a little worrying nonetheless, as no one is saying how it can be reproduced.
The C300/Vertex2/X25-M are all great performers.
Thanks,
I will just look for the best deal on the size I would like then.
Wonder if the SF drive issue pertains to all manufactured SF drives or just mainly OCZ? I would think it would be any SF drive... Makes a person sceptical. I am sure none of us can afford to have this issue.
What you guys are saying actually explains Intel's move to a bigger and cheaper SSD, but not a faster one, for G3.
I personally noticed a BIG jump from HDD to my good old OCZ Vertex 30GB, but I have tried really hard and didn't notice any typical or visible ;) difference after moving to the Vertex 2 and the C300 later.
I guess it's safe to say we need cheaper and bigger SSDs now, not faster ones.
Maybe it's not because of SATA 6Gbps, but because random read at QD=1 has not improved since the Intel X25-M/E?
20MB/s is what we could get on Intel's and it's what we get on SF drives and C300...
QD32 of >200MB/s is nice (and useful for me), but not ever used by an average user.
You're looking at the wrong culprit ;)
To jump in real quick, since my comment seems to have revived the thread: I did refer to the 'average Joe', or typical user activity, in which they will never (or may never) have reason to utilize high sequentials whatsoever.
yes they are for desktop usage.
one thing that is still overlooked here a lot by people is the typical user pattern though.
when reviewers test load times, install times, etc, they usually do it with nothing else on the disk. NOT with an OS, or if they do (rarely) have an OS installed, they have only the OS and nothing else.
what about desktop gadgets, antivirus, mail, browser, media player, FPS measure, everest, etc? this is what typical users have
now lets look at my not-so-typical usage pattern (multimonitors FTW!)
i play almost all of my games windowed so that i can do other things at the same time.
so what was i doing last night?
i watched the first miner come out live streaming from CNN (in HD) from the mine in Chile while i video chatted with Tilt in Greece. I also had my media player up because i jam when i play, at the same time i was playing farcry2 windowed. i also had my email up because i leave it on as a matter of course, so that when i get emails i get that nice noise :)
now i also have several desktop gadgets that are running to tell me speed of my internets, the weather out, and the cpu/ram usage, and the system monitor.
so not necessarily normal usage, but very heavy. YES i am running Kaspersky, and several browsers so that i can keep an eye on the forums...
However, i can do this, lag free and i mean totally lag free. on the game, on the video from cnn while talking with tilt, and some music playing ever so lightly in the background.
Now, that is some heavy stuff there. And smooth as glass. everyone brags about how fast things load, etc...but what about how well they RUN once loaded?
where are the benchmarks for performance with that? i put up some example pics of me doing precisely what i told you tonight while chatting with tilt, to illustrate my point.
Now, the last pic i am showing you is of my disk read per sec activity while doing this. note the spikes. the graph is set at 70, not 100 for illustration purposes. now, look at the spikes in disk read as i do all of this.
the faster you resolve those spikes (by having ultra fast i/o) the better your system will run. and mine is smooth as glass even under this load. the CPU and RAM aren't being loaded very heavily; it is the disk I/O that is making it smoooothhhhh. most people's systems would sputter and hitch and lag like crazy.
during gaming they say around 70 percent of access is sequential, so yeah, resolving those huge sequential read requests as quickly as possible is crucial to doing things of this nature effectively. that way the system can also do the other myriad of accesses that it is being required to.
BEWARE the "standard" tests. there is nothing standard about them. they are as far from real world as you can get. how many people sit there and load games off an SSD with nothing else on it? or an OS with just a game and nothing else at all running? well, probably people who need SSDs with ultra fast speeds :)
*LOL i always love that the FPS is showing how many FPS i am getting to tilt in Greece :) )
http://i517.photobucket.com/albums/u...9-19-19711.jpg
http://i517.photobucket.com/albums/u...9-19-09570.jpg
http://i517.photobucket.com/albums/u...9-19-03544.jpg
http://i517.photobucket.com/albums/u...9-27-16178.jpg
@computurd: what drive(s) are you running?
five 30GB OCZ Vertex (gen one dinosaurs) on an Areca 1880 controller with 4GB of cache. and for those who want to cry CACHE! i can tell ya i have also done this type of stuff easily with the 9260 (512MB) and the 9211 (no cache); the only constant is that they are all 6gb/s cards. yeah, not standard stuff, but it shows the benefits of these types of speeds.
another thing that is not being mentioned here is that 6gb/s raid cards have better latency, even with 3gb/s devices. there are many *previously* documented cases of guys getting .05 latency with intels and c300's on 6gb/s platforms. can your 3gb/s interface do that? hell no. now, once we get some more advanced 6gb/s drives out, and put them on 6gb/s controllers (and onboards, soon), then we shall see ultra low latency.
now, lets talk about some other things that are benefits of the 6gb/s protocol. all we are mentioning here is the maximum speed.
that is not seeing the forest because the trees are in the way.
there are many more important advances to 6gb/s than maximum sequential speed.
random access has been mentioned as what is most important. the 6gb/s protocol is way faster at random access than the 3gb/s. it is a matter of course that it will be better. that's why they do these things.
*Isochronous streaming: a Native Command Queuing (NCQ) streaming command to enable isochronous quality of service data transfers for streaming digital content applications
*An NCQ Management feature that helps optimize performance by enabling host processing and management of outstanding NCQ commands.
*Alignment with the INCITS ATA8-ACS standard.
now, NCQ is what drives random I/O to ever higher peaks. there is an enhanced instruction set for NCQ with sas/sata 6gb/s. this increases your random performance. it also increases the reliability of write combining with SSD usage, so that you are writing more effectively.
also, Isochronous Streaming should speak for itself. this is awesome stuff here.
not to mention that the processors and other components involved are faster than the 3gb/s counterparts, thus resulting in faster everything, random included :)
these are the advances that 6gb/s gives us. to say that it is not needed and not better is being misinformed. this is an entire specification, that has MUCH more to it than the fact that it increases sequential speed.
Computurd....
Gotta like your setup as i just ordered my 28" to enhance my 22" this morning...
yeah man i love multimonitor, been doing it awhile. even though i run windowed usually a game or browser takes up my whole second screen. i just pulled down the game a bit to show the browsers behind it.
I have four screens and have even experimented with using three and four at once, but that is a bit overwhelming even for a power user :)
the whole thing really opened up for me when one day my buddy said "hey run your game windowed so we can chat while you play"
increases productivity for sure.
playing with the sound mixer is key though, so i don't get too overwhelmed with different audio outputs. i find my eyes can track three or four things at once, but my ears can only follow a few... so i put the music low, the game sounds in the middle, and the chat/video streaming at high. sometimes i have to cut out audio on a few things (like the miners for example)
strange that it works :)
Yep that’s about it. :rofl:
The other thing on my wish list is a budget 1TB SSD for around £300 so that I can get rid of my storage HDDs. All I care about for static data storage is fast reads and data integrity. I don’t need a high performance SSD for static data, and I wouldn’t care about write performance or write endurance. TLC NAND might be able to deliver the capacity and price, although I’m not so sure about the data integrity. I’ve asked Intel if they would consider such a product.
I’m open minded on SATA 3.0. So far I’ve not seen anything of interest, but its early days and no doubt things are cooking behind the scenes