These SSDs would go with my ARC-1231ML2G raid controller.
BAH!! How do I edit my poll options (never done a poll before)?
X25-E 32GB x4 pretty obviously. :shrug:
Slightly more space (128 vs 120) and at least double the bandwidth of the other two options.
Same space... I would have to say the four X25s too.
Four of them, or I would go with the 256 GB x4 from Intel (I think they have it).
Isn't the X25-M faster than the E?
4x X25-M.. the E is a waste of money.
What will these drives be mainly used for?
You're not going to be able to take advantage of sustained write speed with that workload; it looks to me like those tasks are all CPU and GPU bound in terms of write speed. As someone above said, I think an SLC drive would just be a waste of money for what you are going to use it for.
If I personally had $1500 right now to spend on storage, I would get 3x X25-M 160GB on ICH10R. You will have all the write performance you need, plus you will have a lot more space to put more data on your SSDs instead of HDDs.
Or wait 6 months, spend the same amount of money, and you should get better performance and possibly be able to ditch your HDD completely.
I went with X25-M 80GB x4. Thanks for all your input, guys.
4x X25-E 32GB, no doubts about it. Probably the most stable SSD platform out there atm; speed would be great in RAID. No, actually it'd be sick.
14 1TB HDDs (500GB platters) in RAID 0+1
:P
That's how people end up wasting their money! 4x RAID 0 actually reduces loading performance (the thing that makes your OS feel snappy) by a surprising margin, sometimes by as much as 12-15%.
Unless you wanted to copy and paste files to the same partition all day, you would have to be a fool to go for this option with his workload.
http://techreport.com/r.x/intel-x25e-raid/time-boot.gif
Ignore this one due to raid card.
http://techreport.com/r.x/intel-x25e...load-doom3.gif
http://techreport.com/r.x/intel-x25e...oad-farcry.gif
http://techreport.com/articles.x/16291/1
Why do you think the E is a waste of money? Do you mean in terms of benefit for desktop use? The E and the M use the same controller, but the E is tuned differently. The different tuning and the use of SLC allow much better sustained writes and heavy use, with only a minuscule performance penalty versus the M in typical desktop use.
Probably because in "normal use" no one would write so much that 2x X25-Ms would be overwhelmed, much less 2x X25-Es. The Ms also have the benefit of being slightly faster in reads, I believe, and give you much more storage.
The E is a waste of money for the OPs usage patterns.
The E only makes sense in DB servers that are pounded 24/7 with requests and the like, where its superior write performance, IOPS rating and lifespan will pay off.
For standard or even extensive desktop use, the M will give you so much more for your money.
Also, what's causing the reduction in OS/app-related performance?
a) Lower small-file read performance when you RAID
b) RAID card latency
c) A combination of the above
d) A symptom of varying performance degradation on each of the drives (always having to wait for the slowest drive)
e) Something else entirely
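On point (d): here is a purely illustrative sketch (my own toy model, not from this thread) of why a stripe that must wait for its slowest member can lose on small reads. All latency numbers are made up.

```python
# Toy model: a striped read completes only when the slowest member drive
# answers, so the expected stripe latency grows with the drive count.
import random

random.seed(0)

def drive_latency_ms():
    # assumed model: 0.1 ms base latency + an occasional slow outlier
    return 0.1 + (random.random() * 0.4 if random.random() < 0.05 else 0.0)

def avg_stripe_latency(n_drives, trials=100_000):
    total = 0.0
    for _ in range(trials):
        # the request is done only when every drive in the stripe has answered
        total += max(drive_latency_ms() for _ in range(n_drives))
    return total / trials

single = avg_stripe_latency(1)
four = avg_stripe_latency(4)
print(f"1 drive: {single:.3f} ms, 4-drive stripe: {four:.3f} ms")
```

In this model the 4-drive stripe always averages worse than the single drive, because any one slow response stalls the whole request.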
May I add that those boot-up results are total crap, IMO?
If I compare boot times of my VelociRaptor and a single Super Talent UltraDrive, the UltraDrive is like 15-20 seconds faster loading Server 08 plus all tray apps (Catalyst, AV, etc.). And the X25-M is at least as good as Indilinx-based drives, usually a bit faster.
It's not time for SSDs yet... for now...
The boot-up time of the X25-E in RAID took into account the boot initialisation of the 5405. It's not really testing how fast the X25-E is in this regard, but how slow the 5405 is at initialising.
"Ignore this one, folks. Our RAID setup may take more than a minute longer to boot than the rest, but it's also the only configuration that has to initialize the Adaptec RAID card, which takes its sweet time booting up."
How often do you boot your system a day?
Is booting your hobby? :D
I'm not interested in boot-time benches.
@ tekjunkie
Try the "Ms" from Intel, or the Vertex / Super Talent UltraDrive ME.
@ Xope_Poquar
14x 1TB HDDs? Too high power consumption!
The 6x Samsung F1s in my RAID 6 consume ~35W.
The X25-Ms have been shipped, should be here tomorrow. It should be interesting to see them on ICH10R vs. the ARC-1231ML.
TR storage reviews are total cr@p. I had hints before, and the proof came with their ANS-9010 review, in which they didn't even enable the ICH's caching option and gave RAID 0 results lower than a single drive.
I just don't trust them anymore, and therefore I don't even check their reviews.
The Areca is faster. See my comparison (Areca vs. onboard):
http://www.xtremesystems.org/forums/...&postcount=255
When you set the stripe size to 4K, there is a big performance difference between the Areca and onboard ;)
Btw: an extra controller gives you much more comfort (online settings) and security.
@ Chosen.
Yes, the TechReport review is crap.
With many simultaneous accesses, every flash SSD collapses to a few MB/s.
Not an Acard :D
http://www.madshrimps.be/?action=get...21&articID=935 (see Conclusive Thoughts)
I couldn't imagine spending that much money on SSDs. Just ain't going to happen.
Usually? Never.
When I swap/test CPUs for stability though, it's 100x in a row easy. You can save a LOT of time there with 1-2 SSDs on the ICH10R, believe me.
Now as for Areca vs. onboard: the Areca will win overall, especially at 4K random write/read, like F.E.A.R. pointed out. Even though you probably won't "feel" much of a difference running most standard tasks.
As many X-25M's or Vertex's as I can afford.
Considering the price scaling seems to be linear (60GB = $150, 120GB = $300), I'd make my choice based on the RAID card: if it's 4 ports, get 4x 80GB X25-M or 4x 120GB Vertex; if it's 8 ports, I'd get 8x 60GB Vertex :drool:
IMO the X25-Es aren't worth it at all.
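Running the numbers on that: the Vertex prices ($150/60GB, $300/120GB, so ~$2.50/GB) come from the post above, while the ~$320 X25-M 80GB price is an assumed placeholder, not a quoted figure.

```python
# Cost comparison of the suggested layouts. The X25-M price is an
# ASSUMPTION for illustration; the Vertex prices are from the post.
options = {
    "4x 80GB X25-M":   (4, 80, 320),   # (drives, GB each, assumed $ each)
    "4x 120GB Vertex": (4, 120, 300),
    "8x 60GB Vertex":  (8, 60, 150),
}
for name, (count, gb, price) in options.items():
    total_gb, total_cost = count * gb, count * price
    print(f"{name}: {total_gb} GB for ${total_cost} "
          f"(${total_cost / total_gb:.2f}/GB)")
```

Both Vertex layouts land at exactly $2.50/GB, which is why the port count of the card, not the price, ends up deciding the configuration.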
@ jcool, you Luxxer :D
Feel free to write in German.
When I test my system (CPU or RAM), I take the controller out, of course, otherwise you'd go crazy :D
$1500? Yeah, I'll be going for ~6x new SLC SSDs soon :)
F.E.A.R., it'll put your RAM drive to shame :rofl:
I think we need some real-world tests, because the only advantage I can see with a RAID card is cache for 4K writes. What about reads? They usually outnumber writes! Also, the X25-M doesn't need any help in that department anyway.
I think benchmarks with drives on RAID controllers give unrealistic performance promises, due to the cache affecting the benchmark.
Does the cache improve read performance?
@ One_Hurts: perhaps you could chime in, as I know you used to use hardware RAID.
If you think cache is unrealistic/unimportant... remove your CPU cache and then tell me cache is unrealistic.
Cache on a RAID controller is just as important.
May as well remove your RAM/CPU/mobo/OS, everything!
Tell me... what are you left with?
Goodness gracious, there's so much baseless talking in this forum but little experience talking.
I'll tell you this: adding the Areca 1231 to my Q9650 @ 4.6GHz is like adding 100+ MHz to the Q9650 ;)
I know... I'm too naive for sharing everything.
Look man, I don't have the time to spend all day/night on forums, why/how this, why/how that... I'm not here to correct everything, just to balance things a bit :D
Anyway, I'm not here to push/force RAID cards down anyone's throat.
Happy with no RAID? Stay happy... if my RAID makes you sad, my apologies :)
Cache = efficiency.
That's what I mean by "remove CPU/video/SSD/RAID/OS/everything".
Remove all that cache and what are you left with?
Reality?? But of course the Areca 1231 improves SSD efficiency,
otherwise I'd be stupid to spend loads of $.
But only for 2GB, then your cache is full of data. ;)
Let a flash SSD array run with many simultaneous accesses and you'll see a big drop.
No flash SSD can beat the raw performance of DRAM-based SSDs (and hasn't for 30 years) :p:
See the RamSan 440 or Violin 1010 :D
When you start a second run of an app (or whatever), you can read from the controller cache. Max 1600MB/s is no problem for one controller.
Here we go again with the cache :)
Not when it's bound to SATA :p:
My only concern is whether you'll believe the numbers I'll post up :D
You can't get more than ~800MB/s without cache, and 1600MB/s with cache.
I'm still waiting for IOmeter/Workstation results for your flash SSDs.
I don't believe you can beat DRAM-based SSDs.
Note: the Acard 9010 is in fact SATA 2, but they are connected via PCIe x8 between the Areca and the system :D
IOmeter doesn't correspond to overall system performance :shrug:
What's the point... you guys won't believe me if I tell you that just a few hundred IOPS beat the crap out of thousands of IOPS :)
That is what I thought, as it explains the benchmark results: the first run is based purely on the drives' performance, and the second run can be read from the cache on the controller (up to 1GB, for example).
Let's say you open Photoshop, then open a 1GB file, then close Photoshop.
When you then go to open Photoshop again, does that mean the performance is dependent on the drives again and not the cache?
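For what it's worth, the controller-cache question above can be sketched as a simple LRU read cache. This is a hypothetical model, not the Areca's actual caching algorithm, and all sizes are made up:

```python
# Toy LRU read cache: a hit is served from cache, a miss goes to the drives.
# A read larger than the cache still evicts older entries but is not kept.
from collections import OrderedDict

class LRUReadCache:
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0
        self.entries = OrderedDict()  # name -> size in MB

    def read(self, name, size_mb):
        """Return True on a cache hit, False when we had to hit the drives."""
        if name in self.entries:
            self.entries.move_to_end(name)
            return True
        # miss: fetch from the drives, evicting LRU entries to make room
        while self.used + size_mb > self.capacity and self.entries:
            _, evicted = self.entries.popitem(last=False)
            self.used -= evicted
        if size_mb <= self.capacity:
            self.entries[name] = size_mb
            self.used += size_mb
        return False

cache = LRUReadCache(capacity_mb=512)   # e.g. a 512 MB controller cache
print(cache.read("photoshop", 200))     # False: first launch, cold cache
print(cache.read("photoshop", 200))     # True: second launch served from cache
print(cache.read("big_file", 1024))     # False: too big to keep, evicts the rest
print(cache.read("photoshop", 200))     # False: the big read flushed it out
```

So in this model the answer is yes: once the 1GB file has pushed the app's blocks out, the next launch depends on the drives again.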
You must configure IOmeter ;)
Try the workstation pattern with 64 outstanding IOs (moderate load) and/or 256 outstanding IOs (heavy load).
Here you can find 4 patterns: http://www.bigupload.com/code.php?code=T8ECPRXWFH
And here is the manual for a pattern: http://ixbtlabs.com/articles/hddide2k1feb/iometer.html
Did you try FC-Test? It's a real write/copy test with real files! ;)
Example (it's not a software RAM disk!!!):
NOTE: Run every pattern (5 patterns) one time!
http://www.abload.de/img/hc_614b6gg.jpg
Quote:
create install - 0.796 s / 724 MB/s
create iso - 2.013 s / 818 MB/s
create mp3 - 1.389 s / 736 MB/s
create prog - 4.4 s / 323 MB/s
create win - 3.806 s / 288 MB/s
When your last execution was a 1GB pic, your system is loading it from cache.
SuperFetch in Vista or Win7 will help too.
Loading 25 apps in 8 seconds. Sorry for the bad quality. http://www.xtremesystems.org/forums/...2&postcount=72
You don't have to explain your benches to me... but you do have to, to nizzen :D
Here are my best FC-Test results so far @ ARC-1231 / 4x PX SSD:
http://img527.imageshack.us/img527/5838/90577133.png
http://img526.imageshack.us/img526/6514/95593394.png
http://img521.imageshack.us/img521/7769/91324190.png
http://img411.imageshack.us/img411/8056/24745490.png
http://img341.imageshack.us/img341/1170/68672894.png
I just noticed your edit... I've always run benches, all benches, one at a time... come on man!
And now IOmeter :)
For the sake of your RAM drive, I will not :)
Hehe.
I'm waiting for PRAM SSDs, but they will not be released in 2009 :(
btw:
Do you have single-drive results for your PX SSDs (onboard only)?
I don't know about the PX's performance.
In Europe we can't buy the Super Talent PX (I don't know why).
I only benched a single drive (HD Tach), when I found the difference between serial #R and #P.
Well, you can bet that without the 1231 it wouldn't be pretty.
@ 1231:
PX Rxxxx = 148MB/s, 0.2ms
PX Pxxxx = 188MB/s, 0.1ms
I've got 4x PX Pxxxx.
PX #Rxxxx old and #Pxxxx new?
Probably... and the firmware update didn't do anything for the Rxxxx, still 0.2ms.
Hm...
The PX #Rxxx looks like an MLC drive (access time).
Interesting debate. Would it be fair to say this for an Intel SSD / hardware RAID setup?
Intel drives ramp up quickly, reaching peak write/read transfer-rate efficiency at around 8K/16K, after which it levels out.
Hardware RAID is not used to targeting high-speed random data in small chunks; consequently the peak write/read transfer-rate efficiency is pushed out to 64K before it peaks and levels out, but the cache covers this shortfall in SSD performance.
In other words, hardware RAID lowers peak write/read transfer-rate efficiency at typical OS workloads, but the cache makes up the shortfall and provides a positive advantage above 64K transfer sizes.
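That ramp-up behaviour can be captured with a back-of-envelope model. This is my own toy model with invented numbers, not measured data: each request pays a fixed overhead, then streams at the path's peak bandwidth, so a higher-overhead but higher-bandwidth path (like a RAID card) only overtakes the direct path above some block size.

```python
# Toy ramp-up model:
#   throughput ~= block_size / (overhead + block_size / peak_bandwidth)
def throughput_mb_s(block_kb, overhead_ms, peak_mb_s):
    block_mb = block_kb / 1024
    seconds = overhead_ms / 1000 + block_mb / peak_mb_s
    return block_mb / seconds

# assumed figures: a drive direct on ICH10R (low overhead, modest peak) vs.
# an array behind a RAID card (more per-request latency, much higher peak)
for block in (2, 4, 16, 64, 256):
    direct = throughput_mb_s(block, overhead_ms=0.05, peak_mb_s=250)
    raided = throughput_mb_s(block, overhead_ms=0.15, peak_mb_s=800)
    print(f"{block:>4} KB: direct {direct:6.1f} MB/s, RAID card {raided:6.1f} MB/s")
```

With these made-up numbers the direct path wins at small blocks and the RAID card wins at large ones, which is the shape of the argument above: where the crossover sits depends entirely on the real overhead and bandwidth figures.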
Yeah, at first I thought I got MLC at the price of SLC... but the write performance is higher than that of their OX SSD (MLC).
Let me make it clear: by "it wouldn't be pretty without the 1231" I was not referring to the cache... whether it's 512MB or 2GB of cache, the 4x SSDs perform at max.
Here's a striped set across 2 Arecas (ARC-1261D-ML + ARC-1210) with software RAID (Vista x64) and 1 Acard 9010 on one of the Arecas.
The right pic shows 2x Acard 9010 with 1 Areca (ARC-1261D-ML).
Workstation-Pattern with 256 outstanding IOs (heavy load) :up:
http://www.abload.de/img/2arecas2ttg2.jpg
2x Acard 9010 with 1 Areca (ARC-1261D-ML)
http://www.abload.de/img/hc_597sokk.jpg
Forget IOmeter... let me see FC-Tests side by side.
Ok, the ARC-1210 limits this array. It's not faster than 2x Acard on the ARC-1261.
I must use 2x ARC-1261 (or 1231/1280).
And LVM2 with Linux is the better choice. Linux is closer to the hardware (is this the right word? :D) of the Arecas than Vista.
So much for IOmeter...
Exactly... I saw your other post... 1x 1261/RAM drive is still faster than soft RAID across multiple controllers.
You need 2x or more of the same controller to get something worthy in return from soft RAID.
Napalm... please bear with me ;)
The benchmarks below are flawed as a direct comparison (different IOmeter tests, and the Intel benchmark is based on the M, lol)... but I think the M and E are similar in the way they ramp up quickly to reach peak write/read transfer-rate efficiency before they level out. You can see on the Intel chart that the M peaks at 4K before it evens out at around 75 MB/s, roughly the same value as in the Arc benchmark. 2KB, on the other hand, is about 50% slower on the Arc benchmark, but at anything above 4KB the Arc blows the M out of the water.
The Es on the Arc are also not peaking until between 32KB and 128KB, depending on the array configuration... so the question I am asking is this: is hardware RAID a bottleneck for SSDs on small reads/writes, an area that has never before been a target of optimisation on hardware RAID because of HDD limitations?
http://www.audienceofone.co.uk/c.jpg
Edit: ARC-1231 IOmeter settings (taken from here):
# of Outstanding I/Os: 16 per target
Burst Length: 16 I/Os
Volume Size: 10 GB
Intel X25-M IOmeter settings:
Queue depth: 32 across 10% of the disk span
^ I asked Areca back in 2006 about that... it's gonna take a whole lot of us to convince them.
@ audienceofone - sorry about that :)
Well, I see a 5MB/s difference between 1x X25 and 5x X25 @ 4K.
If the 1231 were a bottleneck @ 4K, then 5x X25 should be 5MB/s under 1x X25.
The X25 controller is programmed for a very rapid ramp-up;
the 1231 controller is programmed for a more linear ramp-up.
If Areca programmed the 1231 for a rapid ramp-up like that, 2K would be no different than 4K.
Now... is it just programming? Is it also the capability of the controller? Idk, I'm not the maker of these controllers.
One thing's for sure: cache definitely helps the controller be more efficient, the same way cache helps a CPU be more efficient.
How much cache? Ultimately it comes down to the apps that run on these controllers... some prefer more cache, some less.
Here's the problem: if efficiency prefers less cache, then apps that prefer a large cache get hurt, the same way a large cache hurts small-cache apps.
It comes down to finding a balance, a win/win situation.
@ Napalm
What do you think?
Is an ARC-1680ix with the newest firmware the better choice (better than the ARC-1231/1261/1280)?
I'm looking for tests with the 1680ix + new FW + SSDs.
But look (info on Areca's IOP348 controllers)...
IOP348 controllers simulate SATA in software? :confused: Quote:
For SAS controllers, SATA drives are not a first-priority design purpose. The controller uses software to simulate the SATA protocol. Compatibility between SAS and SATA is still being updated. For example, if you are using the newest Seagate 1TB drive, you have to use the newest drive firmware (SN05) for better compatibility with Intel SAS processors. For some other SATA drives, disabling the SATA NCQ feature in the controller settings is the better choice, to avoid abnormal drive failures. In other words, although most of these compatibility issues between SATA and SAS have solutions now, Intel and the disk vendors are still co-working toward full compatibility.
^^
Thanks for that, Napalm. I'm tempted to do some benchmark comparisons between a single X25 on hardware RAID and a single drive on the mobo, to see for sure whether hardware RAID really lacks the ability to fully utilise the quick ramp-up of the X25 at 4K and below.
I'm also tempted to do the same thing for soft RAID vs. a single drive, to see what difference that makes, if any.
It would be nice to have the IOmeter config that Areca/Intel used. I'm also not sure how they extracted the results into the format they used to display them.
I say "tempted" as this will be a lot of work, and I'm fairly sure anyway that the ramp-up on hardware RAID is in fact slower. I think what it shows is that a single drive at or below 4K is just as fast as a RAID setup, maybe even faster, but anything above 4K will be faster on hardware RAID.
Assuming I'm right, the impact on typical real-world usage patterns can be seen here.
(Taken from here )
http://www.audienceofone.co.uk/z.jpg
The next-gen Intel will use a different controller and will have improvements in write speeds and a lot more capacity, so unless hardware RAID becomes more optimised for SSDs, I think I will switch over to a single drive when the new-generation drives come out.
@ fear - that's what happens when you've got 2 different interfaces on the same controller... one native, the other has to be simulated.
The 1680 is the better choice for SAS SSDs/RAM drives;
the 1231-80 is the better choice for SATA SSDs/RAM drives.
The question is: where are the SAS SSDs and SAS RAM drives?
Just in the last few weeks the market got flooded with new SAS controllers, but no SAS SSDs/RAM drives.
---
@ audienceofone - yeah, do that and see for yourself... I'll post 1x @ onboard / 1x @ 1231 / 4x @ 1231 of my PX SSDs as well.
^^
Test with IOmeter? If so, why don't we both use the same test config to compare results? It would be nice to be able to present the results the same way they did... do you know how Areca/Intel extracted the write speeds for each transfer size?
XLS from Areca (Super Talent + ARC-1280 / 1680ix-8 / 1680ix-24):
http://www.picdo.net/Fichiers/3625b2...t_SDD_8HDD.xls
Note: there are 8 tests inside the XLS (see below).
Here... ;)
I hope your money-bag is big :D
http://www.stec-inc.com/interface/sas.php
But the best interconnect is Fibre Channel (like the RamSan 440 or Violin 1010):
http://www.storagesearch.com/ssd-fastest.html
I've got these here now. If you want a quick comparison, here's CrystalDiskMark... IOmeter later.
1000MB - 1x @ ICH10R / 4x RAID 0 @ 1231 (512MB cache)
http://img39.imageshack.us/img39/7444/19601946.png http://img34.imageshack.us/img34/5062/89851246.png
100MB - 1x @ ICH10R / 4x RAID 0 @ ICH10R
http://img35.imageshack.us/img35/5222/91441038.png http://img41.imageshack.us/img41/631...telcrystal.jpg
The 4x @ ICH10R run was prior to the firmware update; post-update it's up about ~20MB/s read / ~10MB/s write.
I've got to redo 4x @ ICH10R; this is just to show/give an idea of how my PX SSDs perform @ onboard/1231.
And that's an 800MHz IOP vs. a dual-core 1200MHz IOP :)
Still, I wouldn't go for the 1680... due to latency, overall it's no faster than the 1280.
If you want to give the 1680 a try, go ahead... maybe you'll get more out of the RAM drives.
Nice... maybe Bill Gates would be generous enough to grant us some $.
The 1680 is for real RAID arrays: 5, 6, 50, 60, etc. It's an enterprise-level controller with the latest Intel RAID 6 engine. That's its primary purpose: large parity RAIDs. If data integrity combined with performance is your goal, it's the superior choice.
Found on a French forum...
http://www.nokytech.net/forum/3111559-post336.html
http://www.youtube.com/watch?v=VNPPRhPV7y0
lol @ first linky - forget it.. acard 9010 ftw!
I definitely wouldn't knock anyone for spending money on their toys, but what are the tangible benefits of having uber fast arrays like napalm's and what type of workloads benefit from that type of a setup?
I am getting some interesting results when I pit my Intel X25-M RAID 0 array against the OCZ Core V2 RAID 0 array (both 64K stripe size). The Intel array loses by a bit in read benches, but there is absolutely no contest when it comes to writes. That was what I expected, but what's really interesting are the ATTO results: the Intel array seems to win in both reads and writes by a fair margin at smaller file sizes, the Core V2 array wins at larger file sizes, and Crystal confirms this too.
It looks like I've got some work ahead of me trying to find the right stripe size for my kind of use on my primary rig. I've also got to decide on the right combo (of course the OCZs on ICH10R RAID 0 will be avoided, at least for my primary rig). Since I do a bit of video encoding, I want to keep my stripe size at 64K or 128K. Interesting times ahead.