+1+1+1
:d
WOW! Reads I can dig, but writes - WOOOW. (and I don't mean World of Warcraft :D)
I wonder how different stripe sizes will do for these in RAID. Please try out 4K/16K/64K stripe with 4K/64K random reads/writes in IOMeter. (I can send the config file if you need it.)
So Jealous!!!!!!! :)
Results already!!!!!!!!! ;)
OK, after many, many hours I am done. Used a 128KB stripe... anything lower resulted in a good deal slower performance.
I am not too impressed, tbh. The read IOPS are laughable (queue depth was 1 in my tests). If the queue depth is 32 then it gets around 33k 4K random read IOPS, as advertised by Intel. Hitting CPU limits in a lot of places... all the game loading times scale linearly with extra CPU speed. Also, as you can see by the last test, doing multiple things to the array at once doesn't slow it down at all... I should have been sleeping over an hour ago so I am done for today... will do more testing tomorrow.
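(An aside on the queue-depth scaling described above: it's roughly Little's Law, where sustained throughput equals outstanding I/Os divided by average latency. A minimal sketch of the arithmetic, my own illustration rather than anything from the benchmarks, with the latency value assumed for the example:)

```python
def iops(queue_depth, avg_latency_s):
    """Little's Law: sustained IOPS = outstanding I/Os / average per-I/O latency."""
    return queue_depth / avg_latency_s

# At ~0.25 ms per 4K read, a queue depth of 1 tops out around 4k IOPS:
qd1 = iops(1, 0.00025)  # 4000.0
# Keeping 32 requests in flight lets the drive overlap work internally,
# which is how the same drive can reach the advertised ~33k read IOPS.
```

This is why a single-threaded desktop workload can't come close to the spec-sheet read number: the spec assumes a deep queue.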
Passmark results with the same settings as the TweakTown review linked earlier in the thread. A good deal slower than the ioDrive...
Passmark – File server - 181.86MB/s
Passmark – Web server - 263.55MB/s
Passmark – Database - 219.53MB/s
Passmark – Workstation - 56.6MB/s
Everyone is free to use and repost the below image anywhere they want.
~12K random writes and only ~4K random reads? Should that not be in reverse?
One_Hertz
Thank you man!
Well, according to my standards (low budget :)), I see only a small increase in real-life application performance for the price of the Intel SSDs.
Do you think the Intel SSDs will have lower performance on the ICH9R?
I want to exclude any performance increase due to the controller.
Other than that, your system must be a rocket :D
That's what you would think but no... not in reverse. Tried it many times. The cache seems to be inflating the random writes number and the random reads are pretty sucky in a single user environment. Like I said before, add some threads and the random reads spring up to about 33k IOPS.
Realistically, I've found there isn't a lot of reason to get the Intels for a single user, at least at current CPU speeds (and my CPU isn't exactly a slouch, a 4GHz QX9650). Yeah, they are the fastest, and RAIDing any number of the OCZ drives together won't make that array quite as quick (because it will still bog down here and there), but the difference currently isn't massive. The biggest benefit of the Intel drives is that I can have lots of things going and all the tasks will run at full speed (I don't even know why or how this happens). I can't bog the array down and make it slow down no matter what I do. It was very easy to literally freeze everything on the OCZs, and I constantly had to adjust my usage patterns so that they wouldn't freeze up.
With all that said, I won't be returning the Intels because of how consistently quick they are. With the OCZs it was always a question of whether it will be fast or if it will freeze up on me. The Intels are always very fast regardless of the workloads.
If I was not the kind of person that would change every part in his computer every 6-9 months then I would have returned the drives...
I really don't see any purpose for faster storage devices. Everything that is important is limited by CPU speed right now in my system and adding more storage speed will do nothing to the system, so in a sense storage speeds are maxed out for the time being.
I still welcome any requests for benchmarks and will do everything that is asked.
It will be interesting to see the new SSDs with new controllers and how they behave.
I use RAID0 on ICH10R and I notice some freezing when doing writes, though my computer use is light.
Overall, it works well.
Happy with the RAID0, but going to get a RAID card to get the best out of the system tho
First of all, thanks a million for the very easy to read benchmarks. :up:
Seems to me that the RAID controller offers almost as much value as the SSDs themselves.
This comment is a little confusing. If you were the type to keep hardware for a while, why not keep the drives? They seem like top-end drives offering the best performance, and they would stay top performers for a decent amount of time. :shrug:
One more comment... in general... for what reasons would you return the drives in the first place? Please don't tell me you return something just because it doesn't measure up to your standards? :shakes:
With the speed the SSD market is moving at, it will very soon be easy to get matching performance (probably lower on paper, but limited by CPU speed anyway, so effectively matching), more storage, and for less $. Which is why I said that people who like to keep their hardware should probably wait, simply because 32GB is very tiny and will be even less useful by the time Windows 7 hits...
I would have returned them had they been slower. What's the issue with returning things anyways? The 15% restocking fee is there for a reason... not like anybody but myself would get hurt by the return. There aren't any decent benchmarks/comparisons on these drives anywhere on the internet so for me it was sort of buy it and try it.
Yep :D
With traditional hard disks, like slower 7200RPM SATA drives, you won't see much of a difference, but you can really uncork some performance with quality high-performance drives versus typical low-cost onboard controllers.
...
Thanks for taking the time, One_Hertz, to do all those benches; those Intel E drives are awesome. Hopefully SLC drives keep coming down in price, as they're absolutely marvelous if the apps you use are latency-sensitive.
There must be something wrong with your PCMark05 numbers...
Here's my single X25-E on ICH10 (no h/w RAID cards etc)
http://img262.imageshack.us/img262/8058/pcmark05va4.jpg
Thx for the testing One_Hertz!
Here is some more info for comparison purposes (from the source listed below)
X25-E, NV 780i
http://i43.tinypic.com/xmn5o2.jpg
X25-E, ICH9R
http://i43.tinypic.com/fydg08.jpg
There are many other tests for you to compare with for a single drive, e.g. h2bench, ATTO, CrystalDiskMark, limited IOMeter testing...
X25-E, NV 780i
http://forum.ssdworld.ch/viewtopic.php?f=4&t=84
X25-E, ICH9R
http://forum.ssdworld.ch/viewtopic.php?f=4&t=81
From chosen's results, looks like ICH10 >> ICH9 for these SSDs if we just look at PCMark 05
Quite frankly at this moment I think the hardware raid controller adds a lot of latency to the Intel drives and it may even be faster on ICH9R. I will test...
As for the OCZs - on the ICH9R they were plainly unusable for me. On the hardware controller they were OK as long as I only did one thing at a time. For example, I could browse in Firefox fine as long as that was the only thing I was doing. If anything else was happening in the background then it would stutter. On the ICH9R it would stutter when JUST browsing Firefox and nothing else. You have to understand how the drives work. When you send them a write request and then a read request, they HAVE TO FINISH the write request BEFORE they can get to your reads, so a stutter is essentially you waiting for the drive to finish its writing activities (which can take a noticeable amount of time on MLC drives with the JMicron controller).
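(The write-blocks-reads behavior described above can be modeled very crudely. A toy sketch of a strictly serialized device queue, my own assumption rather than how any specific controller actually works; the timing values are illustrative only:)

```python
def stalled_read_ms(pending_write_ms, read_service_ms=0.2):
    """Toy model: in a strictly serialized queue, a read issued behind a
    slow write must wait for the entire write to finish before it is serviced."""
    return pending_write_ms + read_service_ms

# A 200 ms write/erase stall turns a sub-millisecond read into a ~200 ms stutter,
# which is exactly what a frozen browser feels like:
stutter = stalled_read_ms(200)
```

The point being that perceived "stutter" is dominated by the pending write, not by the read itself.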
Great great work! Thank you Very much!
Testing them on ICH9R like you plan will also be interesting to see!
I really hope you get your hands on some of the internally-RAIDed SSDs that are coming to market as we speak, as well as the Vertex drives, and run the tests on them too.
Those new drives will hopefully deliver what they promise for the mass consumer and not disappoint under the tests you are running!
Try disabling every cache option (except the disk cache) and rerun IOMeter. The random writes should land in the right spot then, at least; not sure about the reads. (And you don't really need read-ahead on these.)
~12K random writes @4K block = ~50MB/s, presuming 10 sec run time = ~500MB - just about right for the cache to influence it.
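(That back-of-envelope checks out. A quick sketch of the same arithmetic, using decimal MB and the 10-second run time assumed in the post above:)

```python
block_bytes = 4 * 1024      # 4K random-write block size
write_iops = 12_000         # the suspiciously high random-write result
run_seconds = 10            # assumed run length from the post above

bytes_per_s = write_iops * block_bytes     # 49,152,000 B/s, i.e. ~49 MB/s
total_bytes = bytes_per_s * run_seconds    # ~492 MB over a 10 s run
# A few hundred MB is small enough for the controller/OS write cache to
# absorb, which would inflate the reported random-write IOPS.
```

(One_Hertz later notes the full IOMeter runs were ~25 minutes, but the cache still soaks up the front of any run.)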
It can't add latency, otherwise the queue = 32 random reads would not surpass the 12K mark.
Exactly what I was already going to do when I got home. Great minds think alike :)
But idk about the latency part. HDTach/HDTune showed 0.2ms. These drives should have 0.075ms.
Edit: IOMeter was running ~25min for all benchies.
Edit2: Turned off all caching and random write IOPS are 1800 for queue of 1 and 4400 for queue of 32. Random read IOPS did not change. Will test on ICH9R later...
maybe, ""latency part. HDtach/HDtune showed 0.2ms. These drives should have 0.075ms"",
the above tools are not exacting enough to properly assess latency with this technology?
:D
Yep, random writes are now within Intel specs.
Quote:
Edit2: Turned off all caching and random write IOPS are 1800 for queue of 1 and 4400 for queue of 32. Random read IOPS did not change. Will test on ICH9R later...
I didn't expect much change for random reads, though.
The latency sounds right for queue=1 though, since at 4k IOPS, 1s/4000 = 0.25ms.
NCQ?
Did you try on ICHxR instead of Adaptec?
Yes :(
It got 6.1k IOPS... Same idea as with the OCZ drives. The Adaptec 5405 INCREASES latency by 30-35%... That is so depressing I don't know what to say.
I've done A LOT of looking around and it seems like ALL RAID controllers do this. I am so surprised I have never heard people mention this! We need proper SSD RAID controllers that don't gimp the latency!
Personally, I have used both SSDs (2x Mtron in RAID 0) and 2x VelociRaptor in RAID 0, and I couldn't feel any real difference. At least not where it counted. Sure, some minor things were maybe a wee bit faster, but overall I am firmly of the opinion that the people who rave about how much faster their new SSD array is over the better disk-based setups are mostly falling prey to the "it must be better because I just spent $$$$" effect. It "feels" so much better because they dropped so much cash that nothing else would be acceptable. Everyone posts HDTach and HDTune results all the time, but those benches are just misleading.