I had originally posted this over in the intel i7 oc thread but I thought I might repost this here.
8x16GB Mtron SSDs in RAID 0 on Highpoint 3520.
I am pleased with the 3520 - I am thinking of buying another or the 3510.
Excellent results.....:slobber:
Is that the max for the controller?
Got the same 3520LF PCI-E controller - excellent :D
Shame I have no Mtron SSDs hehe
Yep, 8 maxes out the 3520, BUT there is the 3540 - http://www.newegg.com/Product/Produc...82E16816115052 - it supports 16 drives!
Holy crap, that must be a nice OS speed to work with :D
Seems the controller doesn't bottleneck at all, seeing as those are 80/100 rated.
Based on how the Areca 1231 (with the same Intel IOP341 processor) maxes out at 829 MB/s, I am probably close to max read at 780 MB/s. See http://www.nextlevelhardware.com/storage/battleship/ - the HDTach screenshot about a quarter of the way down the article.
I also wrote up a review/comparison of the Areca 1261 with the HPT 3520, located at http://www.ocforums.com/showthread.php?t=552010
Reading the start of your review, you need to flash the controller to the latest firmware, as it enables stripe size settings. I had the same problem on a 3510 a while back ;)
128KB or 256KB is optimal for an SSD RAID 0; try both for optimum results.
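For anyone wondering why stripe size matters at all, here is a toy Python sketch of how RAID 0 maps offsets to drives (the stripe size and drive count are just this array's values; the controller's real logic is more involved):

```python
# Toy RAID 0 striping illustration - NOT the HighPoint firmware's actual logic.
# Shows how the stripe size decides which drive serves each chunk of a read.

STRIPE_SIZE = 256 * 1024   # bytes; the setting being tuned above
NUM_DRIVES = 8             # the 8 Mtrons in this array

def drive_for_offset(offset):
    """Return the index of the drive holding the byte at `offset`."""
    stripe_index = offset // STRIPE_SIZE
    return stripe_index % NUM_DRIVES

# A 2MB sequential read at a 256KB stripe touches all 8 drives once:
read_size = 2 * 1024 * 1024
drives_hit = {drive_for_offset(o) for o in range(0, read_size, STRIPE_SIZE)}
print(sorted(drives_hit))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

With a bigger stripe a small random read stays on one drive, while a smaller stripe fans even modest transfers out across drives - which is why the best value depends on the workload and is worth benchmarking both ways.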
Can you please do some proper random access testing & Crystalmark? I want to see how much latency that controller adds (I know the adaptecs add a crapload).
Perhaps the ATTO runs here might help - http://www.ocforums.com/showthread.php?t=552010
Also, 5x16GB Mtron R0 on the 3520 in CrystalDiskMark -
Lol ;)
You just ran the bench in the controller's cache. Select test size > controller cache (256MB) ;)
Thanks! But it's like jcool said, the test size has to be more than the controller cache or you are just testing the speed of the 256MB that's on the controller.
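To make that concrete, here's a rough Python sketch of the idea (the file name is hypothetical - it just has to be a big file sitting on the array; note the OS file cache can skew a rerun too, unless the file is also bigger than your RAM):

```python
import os, time

TEST_FILE = "testfile.bin"       # hypothetical - any big file on the array
CACHE_SIZE = 256 * 1024 * 1024   # the 3520's onboard cache
BLOCK = 1024 * 1024              # read in 1MB chunks

size = os.path.getsize(TEST_FILE)
# If the file fits in the controller cache, you bench the cache, not the SSDs
assert size > 4 * CACHE_SIZE, "test file too small - you'd just bench cache"

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(BLOCK):
        pass
elapsed = time.time() - start
print("%.1f MB/s" % (size / elapsed / (1024 * 1024)))
```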
Nice setup!
Too bad I haven't seen a controller that does over 1GB/s with SSDs.
I got an average read of 785MB/s with RAID 6 on 10x 15K.5 SAS drives. The access time is not SSD-fast :(
http://www.xtremesystems.org/forums/...6&postcount=60
I got 998.5 MB/s average in RAID 0 with the same setup :)
Looks like the Areca 1680ix is the way to go if you want an even faster system. Good news is that the new 1.46 firmware is out for the 1680 series! Maybe it's good for SSDs?
Too bad I don't have many SSDs :(
Out of curiosity, how does this setup compare to 2x X25-Es in RAID 0?
This setup is much, much faster. And much, much more expensive.
I really doubt 2 X25-Es could beat 8 SLC Mtrons. Access time maybe, but certainly not throughput and IOPS, which make that array a lot faster in general.
Edit: IOmeter, but it's a :banana::banana::banana::banana::banana: to set up ;)
Could you also run iPEAK?
From the Areca 1680 thread - is the Everest disk benchmark easy to run?
Does it provide a good assessment of random read and write?
I don't have the 3520 on this machine, but I do have a 680i mobo RAID set up with 4x16GB Mtron Mobis in RAID 0 -
For comparison, here is the linear read test - same 680i NVRAID (not the 3520) but 4x16GB Mtrons -
2x X25-Es get A LOT more IOPS than 8 Mtrons ;).
To the OP - PassMark is easy to set up to test randoms (advanced HDD tests), but I am not sure how it tests things, i.e. I am not sure if cache will fool it or not.
Tests like these:
http://www.tweaktown.com/reviews/172...isk/index.html
I still really doubt that. I'm afraid we'll never know unless he can get IOmeter running...
lol, I am not making a blind statement FYI. 2x Mtrons in R0 get 30MB/s in mixed writes/reads (one of the standard PassMark HDD tests) and 2x X25-Es get 270MB/s in the same benchmark. A normal HDD gets ~3MB/s. I did A LOT of research before I got my 2 X25-Es ;)
Old tech doesn't compare to new tech.
Either way, still nice setups guys!
But I gotta agree with One_Hertz here; everything I've seen would suggest way more IOPS with the X25-E. Although it will vary with block size and queue depth, the pattern is consistent across different settings.
Here is some single-drive IOMeter data on ICH9R (have a look at the links for more, and different block sizes; also, there are other comparable tests of other SSDs there if you dig around):
MOBI 3000
http://i40.tinypic.com/11gunoy.jpg
X25-E
http://i43.tinypic.com/33xae8n.jpg
x25e on ICH9R
http://forum.ssdworld.ch/viewtopic.php?f=4&t=81
MOBI 3500 on ICH9R
http://forum.ssdworld.ch/viewtopic.php?f=4&t=68
Have a look at the individual IOMeter results for a single drive. There is no way that RAID 0 can get >100% scaling per additional drive. So if we compared apples to apples on the same controller, 2x X25-E in RAID 0 should have way more IOPS than 8x MOBI 3000. (FYI, the newer MOBI 3500 has about half the IOPS of the MOBI 3000, but slightly higher STR values; those test results are in another thread if you look.)
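The scaling argument is simple arithmetic; a quick sketch with made-up single-drive numbers (plug in the real single-drive IOMeter figures from the links above) shows the point:

```python
# Best-case RAID 0 IOPS scaling: at most 100% per extra drive, as argued
# above. The single-drive numbers below are hypothetical placeholders -
# substitute real single-drive IOMeter results.

def ideal_raid0_iops(single_drive_iops, num_drives):
    return single_drive_iops * num_drives  # linear scaling is the ceiling

x25e_single = 5000   # hypothetical: one X25-E, random reads
mobi_single = 400    # hypothetical: one MOBI 3000, random reads

print(ideal_raid0_iops(x25e_single, 2))  # 10000 - 2x X25-E
print(ideal_raid0_iops(mobi_single, 8))  # 3200  - 8x MOBI 3000
```

As long as one X25-E does more than four times the IOPS of one MOBI, two of them beat eight Mtrons even with perfect scaling on the Mtron side.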
That workstation pattern? Or server? I guess you are right then... crazy on the IOPS. Still, I'd love to have the throughput of that setup more :D
Throughput means absolutely nothing. Companies don't even bother making their SSDs actually fast; instead they try to increase the throughput so people like you buy their slow SSDs :shakes:. I don't even look at that figure anymore. It is even more useless than 3DMark scores for GPUs.
I agree with you both; it really depends on what you are doing/what apps are being used. I think it's important to look at the whole picture.
But too much emphasis is placed on HDTune/HDTach because they are easy and convenient to run. If you had to look at one test that tells you the most information, it would be the complete battery of tests from IOMeter. It will tell you everything that HDTune/HDTach will tell you and much, much more.
jcool - the "workstation" pattern for IOMeter is defined as 8KB / 80% read / 80% random
If you are working with uncompressed 1080p material that doesn't fit in your RAM, you can never have enough throughput...
Well, what he has would be just enough to edit one large file plus host the OS/favourite apps. Of course you'll need a RAID 5/6/10/50/whatever with mechanical drives to work with as well, which is what I use now. Just saying it would be cool to have both (for me).
:rofl: :clap: I expect better from you, One_Hertz :shakes:
Exactly :up:
That's something forgotten here in the storage subforum :shrug:
Well, you guys put too much emphasis on IOPS... :shakes:
Higher IOPS don't mean squat in real-world apps :up:
:rolleyes: 4x SSD = 8x ADFD Raptors
when:
the more, the faster, the merrier
And you guys put too much emphasis on benches where the CPU/RAM/mobo/OS greatly influence the results.
Good morning, I'm back.
Finished OCing my new memory, so back on the SSD topic.
I flashed my HighPoint RR 3520 to the new firmware and tested all the different block sizes - found that 256KB was the best.
Resulted in slightly improved performance - see below - 8x16GB Mtron Mobis in RAID 0 on the HighPoint RR 3520 IOP341 PCI-E card :)
Informational note - one of the eight Mtron drives is a 3500 (the others are Mobi 3000s).
I'm sure that is hurting me - but how much? Hard to tell.
EDIT - removed incorrect CrystalDiskMark (test size too small - it ran from the controller cache).
Edit- removed.
There was a request to show some PerformanceTest results.
Very easy to run by the way - disk results -
For comparison purposes here is the combined results for PerformanceTest cpu, memory and disk tests -
Lastly, I finally spent some quality time with the IOmeter user's manual.
To provide a comparison to the Intel SSD, I set IOmeter to run 64KB, 100% random, 80% read, at queue depths of 4, 16, 64 and 256 -
edit - removed - incorrect (apples to oranges) comparison.
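For anyone without IOmeter, here is a rough Python stand-in for that access pattern (64KB transfers, 100% random, 80% read). The file name is hypothetical, it only runs at queue depth 1, and it goes through the OS file cache, so treat the numbers as ballpark, not IOmeter-grade:

```python
import os, random, time

TEST_FILE = "testfile.bin"   # hypothetical - a large pre-made file on the array
BLOCK = 64 * 1024            # 64KB transfers, as in the IOmeter run above
READ_PCT = 0.8               # 80% reads / 20% writes
DURATION = 10.0              # seconds to run

size = os.path.getsize(TEST_FILE)
blocks = size // BLOCK
buf = os.urandom(BLOCK)

ops = 0
deadline = time.time() + DURATION
with open(TEST_FILE, "r+b") as f:
    while time.time() < deadline:
        f.seek(random.randrange(blocks) * BLOCK)   # 100% random offsets
        if random.random() < READ_PCT:
            f.read(BLOCK)
        else:
            f.write(buf)
        ops += 1
print("%.0f IOPS at QD1" % (ops / DURATION))
```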
Your CrystalDiskMark is set at 100MB, which means you are testing the controller's cache. Set it to 1000MB to test your array (I think your cache is 256MB).
I have no idea why your PerformanceTest scores are so low; they shouldn't be. Try 100% sequential, 16MB blocks, 100% reads with IOmeter and see what sequential rates that shows; it should show rates similar to HDTune. Maybe PerformanceTest is just gimped... it works for me though. As far as I remember it shows 450MB/s reads, 300MB/s writes and 270MB/s rs+rw for me.
You can't compare the IOPS of your setup on a RAID card to a single X25-E on an onboard controller. With my setup, 2 X25-Es on an Adaptec 5405, I get ~3k IOPS in that test - over 2x higher.
I see it - 450MB/s reads 300MB/s writes - never mind.
CDM rerun at 1000MB - this doesn't look right, does it?
IOmeter 100% sequential read and write for 8x16GB Mtron SSDs on the HPT 3520 -
PerformanceTest just doesn't test your system correctly. Neither does CrystalDiskMark, it seems... Just shows further that there aren't any decent benchmarking programs for SSDs at this point :(. Except for IOMeter of course :D Kind of odd though, because everything was in unison for me and all benchies seemed to work right.
You have posted the wrong graph. It is about 400-500% better (not 50% better) at an equivalent 64K block size.
It really depends on what scenario, what apps you are using and how they are being used
Remember those results are single drive results, on the same controller. How they scale with different controllers is a whole other topic :)
Hi,
Do you have your OS installed on the Mtron array? If not, have you tried? I have the RocketRAID 3510, 4 OCZ SSDs and the GA-EX58-UD5 motherboard, and I can't see the card in the motherboard's BIOS. If you do have an OS installed, can you tell me any and all settings or procedures you followed that might be relevant to me?
Thanks
Here's 4 Intel X25-Es on the RR 3520 controller... what do you guys think as a comparison?
Good morning,
I have the Gigabyte Extreme, but it's pretty much the same as the UD5 - I had no problem booting from the array when it was attached to my Extreme.
I am currently using my 3520 on a different machine though.
I have used the 3520 on at least 3 different mobos and never had a problem like the one you are describing.
Did you try a different Gigabyte mobo BIOS?
Make sure you have the latest firmware loaded on the 3510.
Also - not sure, but it might be that you need to have at least one drive set up on the 3510 for the BIOS to recognize drives connected through the 3510.
Woooa!
You should do the full IOmeter test and OCZ's 4K read/write test so we can see how it blows the Z-Drive away :D