... over @ Anandtech
So much for any faith I had in the anand storage bench.
why would you say that?
is your drive not at the top?
Because 2x X25-V in R0 cannot be nearly 2x faster than a single X25-M. They should be faster, but 2x is absurd.
As soon as I see a large inconsistency like that, the whole benchmark loses validity in my mind.
Edit: to expand, the numbers don't make any sense, specifically on the heavy trace. A lot of what they do depends on sequential writes, which is why we see drives strong in that attribute near the top. The X25-M does poorly compared to other MLCs that offer higher seq writes, which makes sense. However, the fact that their 80MB/s-write RAID is suddenly 2x faster than a single X25-M, which is also 80MB/s, makes no sense whatsoever. The extra read speed of the RAID should help, but nowhere near 2x. Either they messed something up in this particular benchmark or the whole thing is crap.
Edit2: to expand further, comparing both setups:
Seq read - 370 vs 255 MB/s
Seq write - 80 vs 80 MB/s
4KB random write - 60 vs 40 MB/s
4KB random read - 60 vs 60 MB/s
The controllers they are using (X25-V and X25-M) are very similar so the speed of reading + writing at once (also very important for the heavy trace, in fact this is why X25-E is the top drive) should be similar.
With that in mind, it does not follow that their trace should spit out 2x higher numbers in favor of the RAID. I would think around 25% higher would be appropriate.
In any test that mainly utilizes sequential reads, the two R0 drives would be about twice as fast as a single X25-M. Sequential reads scale pretty linearly up to a certain point.
hmm strange. steve-ro thread here seems to indicate very close results to what they posted up on anandtech.
What test is the Intel X25 R0 2x faster than a single? Except sequential?
The aligned writes vs. regular writes look so good on the new drives. How they even top the reads on the same drive is strange. Could reads also need aligning...
But there is something funky going on with the heavy workload trace...
X-25-V single = 223
X-25-V raid0 = 838 ???
That's ~3.75x the IOPS, i.e. better than 100% scaling....
There must be some other bottleneck in the single-drive scenario that the R0 is alleviating.
Probably influenced a lot by the cache. Also I think the increased sequential speed in R0 makes the big difference.
That does put weight on Hertz's statement.. i.e. he's right. I was looking at the entire article not the Anand Storage bench only.
yes there are a few inconsistencies, but overall they do look close. A lot of stevero's testing mirrors that of anand, but, not having any of the hardware myself to test, I cannot say definitively how it is.
Comparing anand's iometer results to mine - some are close - a few are way off - for example 4KB random reads i get 229MBps (qdepth 64), anand gets 62.6MB/s (aligned) - anand commented that something looked bottlenecked - could also be a difference in qdepth.
That said, I agree with anand's general conclusion and the great idea of leaving spare space to (reduce write amplification) make up for the absence of trim.
I just bought 2x Kingston from amazon for $75 each - will convert to Intel X25-Vs. For $150, hard to beat! - http://www.overclock.net/ssd/656984-...-40gb-ssd.html
But how is it possible - even theoretically - to get 3.75x the IOPS by adding 1 drive? (again looking at the heavy trace)
What am I missing here?
Even Anand's own sequential read/writes measurements show ~2x the performance , not 3.75x the performance , and the random reads/writes scale much less than 2x.
Even if all workloads were 100% sequential, wouldn't the theoretical maximum be ~2x ? Since there is a mix in the heavy trace, you would expect it to be <2x
I agree with the value in these drives, but one_hertz raised some good questions
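one_hertz's ~25% estimate can be sanity-checked with a quick time-weighted model. A rough sketch in Python, assuming a hypothetical split of the trace's time across phases (the fractions below are made up for illustration; the per-phase speedups use the spec comparison quoted earlier in the thread):

```python
# Toy model: expected RAID0 speedup for a mixed workload. Each phase's time
# on the faster setup shrinks by the ratio of the two setups' speeds.

def expected_speedup(phases):
    """phases: list of (time_fraction_on_baseline, per_phase_speedup)."""
    new_time = sum(frac / speedup for frac, speedup in phases)
    return 1.0 / new_time

# Assumed illustrative mix (NOT from the article): half the time in
# sequential reads (370/255 ~ 1.45x), the rest split between sequential
# writes (80/80 = 1x) and 4KB random writes (60/40 = 1.5x).
mix = [(0.5, 370 / 255), (0.3, 1.0), (0.2, 60 / 40)]
print(round(expected_speedup(mix), 2))  # ~1.29, i.e. roughly 25-30% faster
```

Under that kind of mix the RAID comes out ~25-30% faster, nowhere near 2x, which is the crux of the objection.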
I can't explain it (other than perhaps the ich10 raid controller or high qdepth or both)- but I was able to confirm ~230MBps random read - using both iometer and AS SSD -
http://img189.imageshack.us/img189/4...ch1orwrite.jpg
By looking at those benchmarks, are there any conclusions to be drawn about which drives would perform best as a database/web/file server? I.e., can you look at the heavy workload bench and say the top performers in that category will be the best?
The test needs ~30GB to run, and the Intel X25-V has only 37GB, so there is no spare space left for this kind of test. With 2x X25-V (74GB total) the load is split, ~15GB each, leaving spare space to write to, and performance shoots back up.
Anand describes it better,
Great testing done by Anand Lal Shimpi once again. Quote:
Our storage bench is based on a ~34GB image, which doesn't leave much room for the 40GB X25-V to keep write amplification under control. With two our total capacity is 74.5GB, which is more than enough for this short workload. With the capacity cap removed, the X25-Vs can scale very well. Not nearly twice the performance of an X25-M G2, but much faster than a single drive from Intel.
this makes me want to consider selling my X25-M G2 and getting two V's.
well, going to the V's it would only be about $30 out of pocket whereas buying another M G2 would be another $220.
If you're going to do that, should make sure to check Newegg early tomorrow. Based on the Shell Shocker preview, the V drive should be the earlier one
Quote:
The test takes up 30GB to be run, the Intel X25-V has only 37GB. There is no space for this kind test. The space is divided with 2 X25-V 74GB, 15GB each, with spare space to write to it, performance tops up once again.
Anand describes it better,
...SNIP...
Thanks Metroid, that explains the scaling single drive vs. R0, and was the bottleneck I was looking for. :up: I should have read more closely! DUH!
Cheers
well the anandtech bench holds true. i remember when i was reading steve's thread how impressed i was with the performance of those drives. or the equivalent, whatever.
very impressive. 84 bucks with free shipping? wow.
I think he addresses this in his explanation of the benchmark results:
The performance scaling is more than perfect, but that's a side effect of the increase in capacity. Remember that Intel's controller uses any available space on the SSD as spare area to keep write amplification at a minimum. Our storage bench is based on a ~34GB image, which doesn't leave much room for the 40GB X25-V to keep write amplification under control. With two our total capacity is 74.5GB, which is more than enough for this short workload. With the capacity cap removed, the X25-Vs can scale very well. Not nearly twice the performance of an X25-M G2, but much faster than a single drive from Intel
one_hertz,
you know, saying a thing like that about anand is out of line IMO,
the guy does SUCH great work, he puts so much effort into explaining, adding images, talking with people, it leaves u speechless.
he answers e-mails, even if u bug him, even if u don't show phenomenal knowledge.
anand's reviews can literally be trusted almost blindfolded,
AFAIK he does the best job there is out there, and I doubt anyone can say otherwise,
and many people owe him a lot.
u should read his SSD Relapse article if u haven't, see what he did with Indilinx and their Barefoot controller for the Vertex line,
his work is the work of a master,
u read it, and u just thank you-know-who that the guy is out there.
the most extreme thing to think in this situation is that they either missed something or made some mistake,
they won't cheat and won't play games,
doesn't seem like them..
as for the article,
great ending with that TRIM-needed point,
shows the beautiful work intel did with their controller,
hats off to this site,
they got a sweet new design too now.
I’m not knocking Anandtech, however in my view the Storage Bench is a bit OTT. That is not a criticism per se, as it highlights the fact that you need to be extreme to show a performance difference in SSDs. Whilst this helps differentiate faster from slower SSDs, I don’t think it helps an average user understand what would be faster in normal use. In that context the tests are a little misleading.
I think the queue depths that Anandtech use are based on what might occur with a hard drive, not a solid state drive, and the 30GB test file is just way too big.
With regards to raid, the comments in the review reveal more than the article. (Thanks to GullLars)
Just my 2 cents.
One bit I don’t understand is the new 4k aligned test. I understand drive alignment but why align a test? :confused:
You can even use TRIM with 2x X25-V Raid0 now, can't you? Looks like a better purchase than a single X25-M in this case. :yepp:
unfortunately no TRIM, see review.
What about this, then? And yeah, I read the review, or I wouldn't have asked.
That is silly. Nobody is to be trusted blindfolded. Everyone makes mistakes.
This is what I was saying. I never said that this was done on purpose. Quote:
the most extreme thing to think in this situation is that they either missed something or made some mistake,
they won't cheat and won't play games,
doesn't seem like them..
yeah but if you look at my post I'm not sure he deliberately/accidentally missed anything
http://www.xtremesystems.org/forums/...33#post4317933
$98.99 with free shipping... not bad :)
Now who wants to grab 4 for me and ship them to the UK?
;)
I've not used this company before but hey £71 :cool:
http://www.kikatek.com/product_info....source=froogle
Your post is not addressing anything that I have written. I was comparing an X25-M to an X25-V.
The really simple version of what I am saying:
X25-M is faster than X25-V (it does have more channels/chips after all), therefore two X25-Vs in R0 can not be 91% faster (heavy trace) or 95% faster (gaming trace) than a single X25-M.
Can one X25-M divide its chips between two SATA ports, i.e. can a single X25-M RAID itself? No, that is not possible.
RAID may achieve up to 2x performance. This is the basic premise of the technology, like Crossfire or SLI. So in theory 2x X25-V = 200%, which in practice or real-world usage is not true, although some benchmarks may achieve it.
Kingston 40GB SSDNow V Series Solid State Drive has already been declared end of life. You could buy the X25-V from Scan, which had an offer of £88 tax included.
Very true but they are currently available for £66 across the water +del :)
Ultimate aim: SSD array @ £50 per drive ;)
Yes, buying 4 from there would save you almost £100. It is worth it, but you need 2 people, as Newegg limits it to 2 items per customer.
Quote:
That is silly. Nobody is to be trusted blindfolded. Everyone makes mistakes.
that's why the almost is added, if it can be said, we all owe him some credit... Quote:
anand's reviews can literally be trusted almost blindfolded.
though enough said on that, if u understand, i think the last post has made it clear :yepp:.
say MM's account got hacked, and u saw his posts saying some crappy stuff, would u say "huh this man is F'd up"? or would u figure out something is wrong here..? Quote:
This is what I was saying. I never said that this was done on purpose.
not trying to create something over it, but i think we should dearly appreciate what he's doing..,
or check a few times what we're saying .. :up:.
enough of that.
Hmm that's strange...
Anand himself told me that he addresses the concerns in your post in his explanation of the benchmark results:
The performance scaling is more than perfect, but that's a side effect of the increase in capacity. Remember that Intel's controller uses any available space on the SSD as spare area to keep write amplification at a minimum. Our storage bench is based on a ~34GB image, which doesn't leave much room for the 40GB X25-V to keep write amplification under control. With two our total capacity is 74.5GB, which is more than enough for this short workload. With the capacity cap removed, the X25-Vs can scale very well. Not nearly twice the performance of an X25-M G2, but much faster than a single drive from Intel
Perhaps he too is not understanding what you were trying to say...
Well, you said yourself "91% faster (heavy trace) or 95% faster (gaming trace)": the "heavy trace" can be attributed to the drive-space bottleneck, and the "gaming trace" to the 2x read speed plus some bottleneck a single X25-V may have. The results seem very accurate and clear; perhaps you could email Anand Lal Shimpi and ask him about it.
If you think I still don't understand what you are trying to say, then disregard my post.
Good choice :)
I am not talking about performance scaling. The over 100% scaling in the heavy trace due to the sizing of the image and limitations of flash technology makes perfect sense to me.
I am simply comparing the performance of the X25-V R0 to a single X25-M and saying that the gap between those two particular storage systems can not be anywhere near its current size.
^I don't get it either. You only start to get a scaling benefit above QD4, and RAID does not improve access time, so I don't really see why there is such a variance in the Storage Benches. It's hard to believe that the test file size made such a difference. The PC Mark benches seem right but the storage bench doesn't. :confused:
EDIT: looking at the Gaming Workload bench if that is mostly sequential.....
X25M 160GB sequential reads = 250, Sequential writes = 100.
X25V 40GB sequential reads = 170, Sequential writes = 35.
...answers on a postcard..... :shrug:
hmmm the funny thing is that other people's independent testing has verified for the most part the gist of what anand is saying. quit beating your head against a wall here man. it is what it is.
even intel engineers have acknowledged this from what i have read elsewhere.
I think we all agree that these drives are a real option for phenomenal performance at a great price, and that is all that really matters I guess. :up:
If these things hit £50 I will get 4 for an on board r0 array just for the hell of it.
But.....I still have one thing that I don't understand. I see OCZ getting very different results when the LE is tested with IOmeter using a 4K offset. Next I see Anandtech introduce a new benchmark using a 4k offset and getting really different results.
Why is the test being off set by 4k and how is that changing the score so dramatically? (I don't really get the Anandtech explanation) Anyone? :shrug:
The tests are done on XP it seems, or at least an XP-aligned partition - i.e. the partition starts at sector 63 - so it is aligned on a 512-byte boundary but NOT on a 4096-byte boundary. Thus it hits extra pages.
If you have a sequential write, you won't notice this much, because whether you hit 1MB or 1MB+8KB is not that big a difference. But in the case of 4KB random writes, instead of writing one 4KB page, it writes to 2 pages, because the partition's LBA offset is 512-byte aligned, not 4096-aligned.
Intel has an algorithm that either detects the partition alignment, or waits for a few reads/writes, figures out they are not aligned, and then automatically underuses the first 64 sectors (i.e. does not use 512 bytes of the first 4096-byte page of flash).
Not sure if I helped or made this even worse now though :D But it makes perfect sense to me why C300 shines in the aligned test.
I wish he tested the aligned reads as well. I think random reads would show a big difference also.
BTW, I think X25-V s***s bigtime due to the imposed limited seq. write. I have 4 x X25-Ms and it is boring to wait for stuff to be copied to it from the HDD R6 array. It takes half the time to do it on the X25-Es. The even more annoying thing is, while I copy to the X25-M array, I can barely access it (its speed is saturated). It feels like a bloody laptop 5400rpm drive :D And we are talking a RAID0 of 4 best consumer SSDs here ;) Now imagine what it would feel to have X25-Vs instead of Ms..
It has its place, but it's not that great for RAID arrays, IMO - especially if you have something that can saturate it. And it takes 2 regular HDDs to saturate 4 X25-Vs with writing.
Thanks for the insights. What confuses me is that Anandtech seem to imply this is a post XP issue: "Our 4K aligned test, more indicative of random write performance under newer OSes puts a damper on the excitement" The Intel performance gets knocked back quite a bit in this test. If the drive is aligned why would an OS (post XP) need a 4k offset in the test pattern to replicate indicative performance?
Aligned reads are almost identical to unaligned; the penalty lies in writing across two 4K pages vs. writing aligned to a single 4K page.
What happens in real life? Do writes typically cross 4K pages? If so, that would imply that testing to make sure they align is not really representative, or is it the other way around?
As long as the SSD is aligned properly, all writes take place at aligned offsets. (using Vista and W7)
Not all writes are at 4KB of course but the writes are a multiple of 4KB e.g. 64KB or 512KB and so on.
Browse this article about 4KB alignment, there's also a bit of background stuff about XP and Vista/W7.
Link to Anandtech "The 4K Sector Transition Begins"
Thanks Anvil, but I still don't get why Anandtech are saying "Our 4K aligned test, more indicative of random write performance under newer OSes" I can see if you can manage to manipulate a 4k write to fall exactly in a 4k sector that writes would be a lot faster, but I can't imagine that that happens in real life (very often).
No, the 4K aligned writes are what happens on post-XP OSes, because they align the partitions on 4096-byte boundary, they do not put it on the 63rd sector (4096-512 boundary).
IOW, if the partition is on the 63rd sector, and you write 4K to the start of the partition (LBA 0), the SSD needs to write the page that covers sector 63 and the page that covers sectors 64-71 (8 total). That's 2 pages instead of one.
On post-XP OSes (4K aligned), partitions start at sector 64 or similar. So a 4K write at LBA 0 writes to only one page.
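The 2-pages-vs-1 arithmetic can be sketched in a few lines (a toy model, assuming 512-byte sectors and 4096-byte flash pages; sector 63 vs. 64 are the thread's examples, while real post-XP installs typically start at sector 2048, which is also 4K-aligned):

```python
SECTOR = 512   # bytes per LBA sector
PAGE = 4096    # bytes per flash page

def pages_touched(partition_start_sector, offset_in_partition, size):
    """Count distinct 4KB flash pages covered by a write of `size` bytes."""
    start = partition_start_sector * SECTOR + offset_in_partition
    first_page = start // PAGE
    last_page = (start + size - 1) // PAGE
    return last_page - first_page + 1

# XP-style partition at sector 63: every 4KB write straddles two pages
print(pages_touched(63, 0, 4096))    # 2
# 4K-aligned partition start: the same write hits exactly one page
print(pages_touched(64, 0, 4096))    # 1
print(pages_touched(2048, 0, 4096))  # 1
```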
4K writes are almost always aligned on 4K partition offsets. For example ALL Windows cache manager operations have 4K aligned partition offsets.
The IOmeter 4K reads/writes are aligned to 4K partition offsets. Otherwise it would not be a 4K read/write, it would be several 512-byte reads/writes.
@Anvil, how do you know reads are not affected? if you need to read 2 pages instead of one, there is definitely a penalty. It's just a question of how big it is. I bet it's not negligible.
@audio, where do you see X25-V/M knocked back? They are at the same absolute performance level - in other words, unaffected. The other drives get a boost with aligned offsets.
It's not that the benchmark is bad. It's that Intel is the only drive that behaves differently (internally) with unaligned offsets. Others behave the same (internally), so they show better performance with expected (4K aligned) offsets.
@audienceofone
It does :)
Everything depends on the cluster size of course, the default cluster size (or allocation unit) is 4KB for modern OS'es, this is the smallest package of space to reserve for any given file.
If a file is smaller than the allocation unit the space that's left in that cluster is unavailable for other use.
If the file is larger than the allocation unit (even by 1 byte), a full allocation unit is required to store that extra byte.
If you have a .txt file and you change one letter in that document, the whole document is generally rewritten, not that single letter.
This is the case for HDD's as well, it's a file system thing.
This given, there is no need to manipulate anything to ensure that the writes are aligned.
The next generation of Intel drives (25nm) will probably be based on 8KB page sizes. This will probably lead to that 8KB allocation units can be a better option but again a multiple of 4KB.
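The rounding described above is easy to illustrate (a minimal sketch, assuming the default 4KB allocation unit):

```python
import math

CLUSTER = 4096  # default allocation unit on modern Windows file systems

def on_disk_size(file_size):
    """Space a file actually occupies: a whole number of clusters."""
    return math.ceil(file_size / CLUSTER) * CLUSTER

print(on_disk_size(1))      # 4096  (a 1-byte file still takes a full cluster)
print(on_disk_size(4097))   # 8192  (one byte over -> a whole extra cluster)
print(on_disk_size(65536))  # 65536 (64KB is already a multiple of 4KB)
```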
@alfaunits
I know because I have tried.
I've been playing around with aligned read/writes for a long time, long before Anand wrote that article.
Ok thanks guys for explaining. :up:
That's cluster, not sector size. The "modern" OSes are just using 4K aligned partition offsets. They still use 512-byte sectors if the HDD is a 512-byte unit.
I thought you just used the wrong words, but the below shows you mean clusters and not sectors:
Quote:
this is the smallest package of space to reserve for any given file.
If a file is smaller than the allocation unit the space that's left in that sector is unavailable for other use.
So how does that count? C300 and Vertex LE just got out recently, that's the drives I am interested in - where the aligned writes make a huge difference. Quote:
@alfaunits
I know because I have tried.
I've been playing around with aligned read/writes for a long time, long before Anand wrote that article.
It should have been cluster not sector of course, thanks for the correction :)
I've got both the LE and the C300.
iometer-wise there's no change (or a very small one) at reads, but as Anand's tests show, writes increase a lot :), the workstation pattern shows improvements as well but the writes gain the most.
When I get the time I'll start a new thread with my results for the C300 and LE drives. (I'll try to gather the info this weekend)
It is strange how the random writes can go >100MB/s while reads can't reach >70MB/s (on C300).
It could be any combination of combined writes, caching and/or the controller, and for the LE the compression algorithms, or merely the fact that they've somehow optimized for 4K writes?
I haven't tried other than 4KB writes, I'll give 8KB a try, both aligned and unaligned.
so is this crap?
http://www.tweaktown.com/articles/31...uide/index.htm
this was my understanding, but the above seems to contradict that
http://techreport.com/discussions.x/18653
I'm about to find out for myself.
I've downloaded the IRST 9.6 drivers and I'm about to install on the Intel's G2's this evening.
I hope it works but I'm not expecting anything, from what I've heard (and read) it will pass the TRIM command to non-raided drives only.
C300 doesn't have the compression.
Combining random writes cannot show such a high difference (sometimes you would get better results, but considering the writes are random, it would not be sustained overall).
Caching cannot improve random writes all the time - once you surpass the cache size, there is little it can help with. The cache size on SSDs is in the MB range, not the GBs needed to affect 30GB test sizes. Plus it would show different speeds in different runs then.
I could be wrong though, but I still think it makes no sense that random writes are twice as fast as random reads.
It is dead easy to align an SSD (or array) when using XP so that it is also 4K write aligned. I'm not sure why Anandtech (and others) do not remind readers of this during reviews...
You just need to set up the first partition with Diskpar or equivalent before you install or restore the image of the OS.
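For reference, the same thing can be done with the built-in diskpart on a Vista/W7 install disc or PE environment before restoring an XP image (a sketch; the disk number and alignment value are examples - align= is given in KB, so align=1024 yields a 1MB offset, which is also 4K-aligned):

```
diskpart
list disk
select disk 0
create partition primary align=1024
```

XP's own installer still creates partitions at sector 63, which is why the partition has to be prepared with such a tool first.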
Edit: I have been using IRST 9.6 since release with no issues.
Speed and access times are exactly the same (for me) as the most recent beta... (9.5.7)...
From the horse's mouth:
Intel® RST 9.6 supports TRIM in AHCI and pass through modes for RAID. A bug has been submitted to change the string that indicates TRIM is supported on RAID volumes (0, 1, 5, 10). Intel is continuing to investigate the ability of providing TRIM support for all RAID volumes in a future release.
Announcement 2:
http://communities.intel.com/community/tech/solidstate
Announcement 2 doesn't work.
Look for the gold star at the top of the page. Select show details and then go to announcement 2.
Link to What features are supported on the 9.6
cut'n'paste from the feature table.
TRIM support in Windows 7* (in AHCI mode and in RAID mode for drives that are not part of a RAID volume ) YES YES YES YES YES YES YES YES YES YES YES YES YES :)
In this case YES means NO for drives configured IN raid but YES for drives that aren't part of a RAID.
(confusing, not really)
funny thing here guys. it has always supported TRIM on non-member drives. this has been a long-advised tactic on several support forums: switch to RAID mode, as it is just a very close form of AHCI.
people have been doing this for a while to avoid reinstalling or doing registry tweaks when they need to switch from IDE.
Edit: RAID mode uses AHCI.
thank-you, I have a special someone to silence in the comments section of anands x25-v raid review.
If you think your 4x X25-M's are too slow do you want to sell them to me :D
X25-V seems like a god-send to me for those wanting to RAID-0 with minimal cost.
can you please still try the new drivers on your G2's and let us know?
Thank-you.
Jalyst,
TRIM doesn't work on raid members.
I've tried the usual stuff to verify but it's just not there.
alfaunits was just reminding me that sequential write speeds are important for some, and I agree that if you spend a lot of time writing sequential files then yes, the X25-V is a rubbish option. In all other scenarios, however, the price/performance ratio of the V drive is a bargain.
The V drive is going to be great for a boot drive and most OS tasks. You could get one and try it out as I mentioned before. If you need something faster you could simply get another one and raid 0 it. This way you take no risk.
The M drives are also great. One drive = less aggravation. Set it up and forget about it. Again for most tasks you will not need anything faster.
To confuse you a bit more, I have been playing with single drives on a low-end raid card today, and I notice that without TRIM things seem a bit snappier, even though an AS SSD bench run is well below that of an ICH10. Conclusion so far: TRIM takes the edge off the "snappiness" of an (Intel) SSD.
:D
Don't get me wrong, the Vs (Intel X25-V, not the alien Vs :P) has its place. But as soon as you have some medium that can sustain nice read speeds the X25-V even in RAID get oversaturated.
I can saturate 4xX25-Ms with a RAID6 HDD array - the X25-E array is sleeping 'till X25-M finishes the copy ;) That doesn't mean you will ever notice something like this (and it's not like people copy large stuff every 20 seconds..).
But unfortunately, when you do, you'd get more annoyed (and some might even get scared - thinking it stopped working) by it.
I often copy VM images to it and the last few days had to copy back/forth some OS images (non-VM) so I really noticed :( I didn't notice something like this for a decade almost on HDDs (had Raptors since they came out).
So it's funny ;)
Just as audio said: If you have a laptop that has 2 HDD slots, putting 2 Vs or two Ms will be a bliss. Scenarios that I talk about are not common - not many people have several HDDs and one SSD or several HDDs and SSDs.
Don't beat your head with it ;) I don't know squat about graphics really.. that's why i just look at the game scores I care about and the power consumption to choose.
Don't even get me started on sound cards - I wouldn't know how to choose from those without listening to each :D
alfaunits,
I redid the 4KB aligned vs 512B aligned iometer tests on my 2R0 LE array.
I'm not sure of what was wrong in my earlier test or if it's just me :D
The results are crystal clear, 4KB aligned reads are gaining a lot on the LE's :)
I'll create that new thread for my benches.
Results for my somewhat degraded 2R0 LE 100GBs (QD64, 1GB file size)
4KB aligned reads: 250-270MB/s (65-70K IOPS)
512B aligned reads: ~170MB/s (43-44K IOPS)
The writes are still ahead though, ~355MB/s or ~91K IOPS.
Anvil, can't wait for that thread ;) Are those sequential writes or what?? That's virtually sequential write speeds.
(on LE I can understand why writes are faster than reads, as compression helps, but C300 doesn't have compression - can you test C300s as well?)
For those wishing to offer opinions on this, it's prolly best posting in my original thread.
So as to avoid hijacking this one any more.... This post is also there.
I'm trying to decide between 2x X25-V, 1x X25-M 80GB, or 1x OWC Mercury 50GB (unlikely)
I'm almost settled on 2x X25-V's, but I'm having a hard time justifying their slightly higher cost.
It's hard to explain my workload in detail at this stage, all I know is it'll house....
*Win7 & some multimedia related apps
*Stripped-down Ubuntu + MythTV + XBMC/Boxee + maybe LXDE & NAS related s'ware.
It'll primarily be a media playback, media capture/transcode, & storage/bu device. i.e. HTPC/PVR & to a lesser extent, NAS.
There'll be at least one 1TB 7.2k drive for storage, & this is prolly also where captured DVB will be dumped.
LT I have plans for a dedicated NAS device with a more sophisticated array/config of HDD's.
I wonder how 2x X25-V performance compares "across the board" to these two drives & the Corsair F100? (100GB SF-1200)
Do the Anandtech bench tools allow one to see all this, or are there supplementary resources?*
And I wonder how their performance differences compare to their price differences??
I'm concerned my usage pattern will be random enough to accelerate degradation of 2x X25-V's in RAID-0.
Hence requiring me to set aside more space than I can spare to mitigate it...
To be safe, I'm pretty sure I'll need 60GB for both OS's, their apps, page file etc, but I'd be surprised if I need more than that.
Will I notice enough of an advantage in my workload to justify their higher cost & potentially higher rate of degradation/wear?
Sorry for the double-post, thought there might be users here not subscribed to the other thread that can help, thanks.
*Anand's X25-V RAID review seems to include Mercury 50GB & the X25-M in all the benches, but no Corsair F100.
*I'll need to analyse this data more carefully soon to help me towards a decision....
So GullLars has recommended 1x X25-V, 2x Kingston V+ G2 64GB, + my storage disks (which initially will only be 1 or 2 1TB HDD)
The idea being that the X25-V is the OS-SSD, & the RAID-0 of Kingston's is my scratch-disk with high sustained r/w.
Does everyone else agree with this basic concept (seems pretty sound to me), perhaps you agree with the topology but not the disks picked?
^^^jalyst you are double posting across threads bro
jalyst,
If your workload involves a lot of copy/deletes you're better off using TRIM.
Anyway, if you get the X25-V and the V+ G2 drives you can try both scenarios.
I have no experience using the V+ G2, only time will tell how much it degrades without TRIM.
(cleaning should be easy though)
Thank-you for offering your thoughts Anvil it is very much appreciated!
Anyone who wishes to do the same, it's prolly best posting in my original thread.
So as to avoid hijacking this one any more....
For anything involving my set-up, I'll endeavour to post only there from now on too.
What's the cheapest X25-V you can get in the US? It can be Kingston or w/e, I'd re-flash it anyway.
Considering picking up two of those instead of X25-M, but Newegg has x25-M 80GB for $215 while X25-V is $125, which makes $250 for two, quite a bit more expensive...
Sorry if that's OT, I'm not very good with US stores...
Edit: FFS, missed the 75$ deal it seems, damn it. :\
Wonder what the chances are of a 75$ deal coming up soon again... 60$ off is something I could use badly.
At that price, I would buy ~40 of X25-Vs for storage :D
Newegg had the X25-V for $98.99 last Friday.
we should start a dedicated, never-ending, thread on x25-v pricing :)
Postulating roughly 170MB/s read and 40MB/s write per drive (these are conservative numbers) and not being bottlenecked by the controller, you can get 40*170MB/s = 6800MB/s read and 40*40MB/s = 1600MB/s write.
In order to pull it off, you could use 2x LSI 9211-8i or 9260-8i with port expanders, but then you need software RAID, since they don't support integrated RAID arrays that large...
If you set it up with 2 X25-V per physical SAS 6Gbps port on 2x 9211-8i, you get 2 (cards) * 8 (SAS ports) * 2 (SSDs per port) = 32 SSDs with 5450MB/s read, 1300MB/s write, 950,000 4KB random read IOPS (which would be limited to roughly 600K by the controllers), and 320,000 random write IOPS. The price of such a setup: 2*$250(?) + 32*$110 = ca. $4000 for 1.2TB.
In order to get this level of performance, you would likely need a dedicated quadcore >3Ghz just to drive the software RAID. It could be doable on a dual socket mobo...
If I could make a copy-paste SSD of parts available on the market, I would consider a LSI 9211-8i + 8 X25-V for 1400MB/s read, 320MB/s write, 80-90,000 4KB random read IOPS, and 80,000 4KB random write IOPS. (80-90K IOPS is the max of the LSI 92xx controller)
The price of such a setup could be $250 + 8*$100(-130) = $1050(-1300) for 320GB of bootable storage using 1 PCIe 2.0 x8 slot.
Would any of you guys buy a 320GB SSD at $1000 with the performance numbers 1400MB/s read, 320MB/s write, 80-90K IOPS read, and 80K IOPS write?
Reducing storage to 250GiB would make performance degradation a non-issue.
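Those array figures are straight linear scaling; a quick check of the arithmetic, assuming ~170MB/s read and ~40MB/s write per X25-V as above and ideal scaling with no controller bottleneck (the post's 5450/1300/1400 figures are the same math with slight rounding):

```python
# Back-of-envelope aggregate throughput for an ideal RAID0 of X25-Vs.
READ_MBPS, WRITE_MBPS = 170, 40  # assumed conservative per-drive speeds

def array_throughput(n_drives):
    """(read MB/s, write MB/s) under ideal linear scaling."""
    return n_drives * READ_MBPS, n_drives * WRITE_MBPS

print(array_throughput(40))  # (6800, 1600) - the 40-drive thought experiment
print(array_throughput(32))  # (5440, 1280) - 2x 9211-8i, 2 SSDs per port
print(array_throughput(8))   # (1360, 320)  - single 9211-8i with 8 drives
```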