It would be more realistic for the default of ASU to use 100% incompressible data.
I think 0 fill should be illegal. You should get carted off to the gaol for using 0-fill.
I don't even run non-SF drives with 0 fill in the endurance test.
Attachment 125023 Attachment 125024 Attachment 125025
Hi guys,
Hopefully I have done this correctly; there should be 3 images attached. First off, a great thanks to Anvil for this great tool :clap:, which I now have to learn how to use... back in the queue mate!!! :cool:
Now if someone could write an optimisation script so that you could get all your drives up to speed that would be brilliant :up:
Can someone help me interpret these images, e.g. the numbers they represent, and tell me how I can improve on them, please?
thanks
Henrik
Here's a result from my new workstation:
http://www.bilder-upload.eu/thumb/76ebf5-1333524042.jpg
nice cache!! :-)
0 Fill won't be default for the next release.
100% incompressible is not normal though, except for media files. (there will always be a mix of compressible and incompressible)
I'll probably make an option where one can disable the continuous re-generating of random data. (it will be less cpu intensive and won't matter for drives that don't do compression)
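To illustrate what the compressibility setting boils down to, here is a minimal Python sketch of mixing random and zero-fill chunks to approximate a target level. This is only an illustration of the general technique, not ASU's actual code; the 4 KiB chunk size and the function name are my own assumptions, and how compressible the result looks to a given controller still depends on that controller's own algorithm.
```python
import os

def make_buffer(size, incompressible_ratio):
    """Build a test buffer that is roughly 'incompressible_ratio' random data.

    incompressible_ratio = 1.0 -> pure random (incompressible)
    incompressible_ratio = 0.0 -> pure zero fill (fully compressible)
    The random and zero-fill chunks are interleaved so the mix is spread
    across the whole buffer rather than split into two halves.
    """
    chunk = 4096
    buf = bytearray()
    while len(buf) < size:
        if (len(buf) // chunk) % 100 < incompressible_ratio * 100:
            buf += os.urandom(chunk)      # random chunk, defeats compression
        else:
            buf += b"\x00" * chunk        # zero-fill chunk, trivially compressible
    return bytes(buf[:size])

# e.g. a 1 MiB buffer that is roughly 46% incompressible
data = make_buffer(1024 * 1024, 0.46)
```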
@tived
I'll have a look at your tests, at first glance they do look normal based on your info.
That is incorrect. Most user day-to-day data will look close to 100% incompressible to Sandforce SSDs.
OS and program installs have data that can be significantly compressed by Sandforce, but most people only install those once, so it is not a good indication of day-to-day saved data, especially with the bigger SSDs.
I just think that when benchmarking, it's ridiculous to just show zero fill. It's principle more than anything.
That's why I like ASU -- I can just bench a SF with every compressibility level and then weight the results as I please. 47% to 67% are far more realistic an average than 0/100%, but 67% on SF is pretty much incompressible I think.
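As a trivial sketch of the weighting I mean (the per-level scores and workload weights below are made-up placeholders, not measurements):
```python
# Hypothetical per-compressibility-level scores (MB/s) and workload weights.
# Adjust both to whatever your own testing and usage suggest.
scores  = {"0-fill": 500, "46%": 290, "67%": 265, "100%": 250}
weights = {"0-fill": 0.05, "46%": 0.45, "67%": 0.35, "100%": 0.15}

# Weighted average across compressibility levels
weighted = sum(scores[k] * weights[k] for k in scores)
print(f"workload-weighted score: {weighted:.1f} MB/s")
```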
I'd have to double check, but I did break out a SF2281 the other day to upgrade some FW and do a SE. I was pleased with its incompressible performance.
johnw,
It's not black and white.
i.e. loading applications will result in reading compressible files; how much writes are affected depends on what type of files one is working with.
Databases take "compression" to the extreme as most are highly compressible. I might end up endurance testing the Force 3 I've still got using the database setting, as it would make sense for my kind of usage.
I've still got the Vertex 3's running my VM's and one of these days I'll check how they have developed.
From what I've seen WA is well below 1.0. (based on the SMART data, how that translates to real WA is of course not known)
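For anyone wanting to do the same check, a hedged sketch of the usual calculation is below. SandForce drives commonly report lifetime host writes and lifetime NAND writes as SMART attributes 241 and 233 (in GiB), but verify against your own drive's SMART output before trusting the numbers:
```python
def estimate_write_amplification(host_gib_written, nand_gib_written):
    """Rough WA estimate: NAND writes divided by host writes.

    On SandForce drives these totals are commonly exposed as SMART
    attributes 241 (lifetime host GiB written) and 233 (lifetime NAND
    GiB written). A value below 1.0 suggests the controller compressed
    the host data before writing it to flash.
    """
    return nand_gib_written / host_gib_written

# Example with made-up numbers read from a SMART tool:
print(estimate_write_amplification(host_gib_written=10240, nand_gib_written=7800))
# -> ~0.76, i.e. WA well below 1.0
```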
Mushkin Chronos Deluxe 120GB 5.02ABB0 Reference FW
Zero Fill
Attachment 125040
46%
Attachment 125039
Incompressible
Attachment 125038
AS-SSD
Attachment 125041
-----------------------------------
I think 46% should be the ASU default. Anything other than 0-fill, though. 67% is almost incompressible to SF -- surely whatever compression it can effect is offset by overhead from not-so-compressible data. While my personal experience is that much of the daily writes are frequently compressible to a degree, it's generally not enough to offset the larger incompressible writes. My workload generates an average WA of ~1.2ish, but my workload is hardly universal.
How was the 67% on the Mushkin?
I know from earlier tests (also confirmed by Vapor's tests) that there is not much difference from 46% to 100% on 2281 based drives, earlier SF controllers will suffer more and Async NAND will still make a difference for all SF based drives. (on current controllers)
Unless you want ASU to cater to Enterprise users, it would be a bad idea to base the defaults on database writes, since very few non-Enterprise users have a lot of database writes to their SSDs.
For typical non-Enterprise users, the best thing to use for a benchmark to correspond to day-to-day usage is 100% incompressible / random data. That should be the default. If you start arbitrarily choosing "randomness" of less than 100%, then your benchmark will be arbitrary and not suitable for objective comparisons. There is a reason why the SNIA tests specify random data. It is fine to have a choice, but the default should be 100% incompressible.
I didn't include the 67%, but it's just about the same as the incompressible. I swear, I never remembered this drive pulling more than 70 MB/s QD1 4K RW in CDM, but here it is hitting 111 MB/s in AS-SSD. Not too shabby.
Without a radical redesign of the SF/SF FW, I really think the next gen SF should go back to 28% OP and should probably remove RAISE. Just having a proper OP seems to really even out the sequential writes. Newer SFs have a funky waveform-like write pattern, a problem I don't think my Vertex LE 100 has.
---------------
Here is the 67%
Attachment 125042
Hmm. The last time I checked (last year) this drive was just about even with 100% and 67%, but I see now it looks closer to 47% than 100%. That could be the 5.xx series FW at work. The writes are a good bit higher than they were on 3.xx FW.
The default won't be Database, I've just not decided.
100% would be worst case for the SF based drives and I'm not sure that it's fair vs other non-compressing controllers; there is a portion of compressible data in any workload, and real-life tests show that SF drives are generally as fast as and sometimes faster than most drives. (up till now that is)
So it will be in the range of 46-100%.
Where does SNIA say that random data means incompressible data?
Thanks, Anvil
Henrik
Hmm, I am looking in MSM, but for some reason I can't find where to get into the properties of the controller and turn "write caching" on.
Henrik
Also, I am a bit disappointed with the performance of my boot disk with 4x Intel 520 in RAID-0, only giving me 590s. Would that be because it's on the SATA-II controller?? This one has write cache turned on in Windows; on my two arrays on the M1015 I can't enable it.
Henrik
My personal belief is that the default should be either 46 or 67.
I think some folk are getting write cache confused.
Disk cache policy on/off is for the disk drive's cache (SSD or HDD); I found it's best to turn this off in RAID0 arrays.
You can find this in MSM under 'Logical', then the array you want to change the cache policy on. (right click on it)
A reboot is needed for this to take effect.
This is the only cache policy available on the IBM M1015 !!
For any single drives on the M1015 you can change this in Device Manager (windoze)
With cached controllers you of course get:
Read policy: Read ahead on/off
Write policy: Write through, Write back and Write back with BBU
I/O policy: Direct I/O or Cached I/O
Plextor M3Pro 2x128 GB in raid0
http://img717.imageshack.us/img717/6430/anvilraid.jpg
http://img801.imageshack.us/img801/720/assdraid1.jpg
There are lots of applications that generate random test data at the application-level, this kind of data is normal data used during application testing.
Neat if one can't export/import current systems.
If you have a look at SNIA's specs they are testing with other patterns and are also debating "how random is random enough".
3.6 Data Patterns
All tests shall be run with a random data pattern. The Test Operator may execute additional runs with non-random data patterns. If non-random data patterns are used, the Test Operator must report the data pattern.
Note: Some SSS devices look for and optimize certain data patterns in the data payloads written to the device. It is not feasible to test for all possible kinds of optimizations, which are vendor specific and often market segment specific. The SSS TWG is still trying to characterize "how random is random enough" with respect to data patterns.
I have read the SNIA SSS documents, no need to quote them to me, unless you have a point you are trying to make. I'm not sure what your point is here.
I think it is clear what is being referred to in the passage you quoted. If the data stream consists of repeated blocks of the same "random" data, then how large a block size is necessary in order to fool all SSS devices into thinking it is a continuous stream of truly random data? The answer obviously depends on the compression and de-duplication algorithms used by various SSS devices, so it is difficult for SNIA to come up with a universally applicable definition of a sufficiently random data stream. Nevertheless, it is obvious that if the data stream can be compressed significantly by a specific SSS device, then the data stream is not "random enough" to be used for the mandatory random data stream.
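A crude way to see the effect being described is to run a general-purpose compressor over the candidate stream. zlib is only a stand-in for whatever a particular SSS device does internally, so treat this as indicative rather than definitive:
```python
import os
import zlib

def compressibility(data):
    """Return compressed size as a fraction of the original size.

    ~1.0 means the stream is effectively incompressible to zlib;
    a controller's own compressor may still behave differently.
    """
    return len(zlib.compress(data, 6)) / len(data)

random_stream = os.urandom(1 << 20)       # 1 MiB of fresh random data
repeated_block = os.urandom(4096) * 256   # 1 MiB built from one repeated 4 KiB block

print(compressibility(random_stream))     # close to 1.0
print(compressibility(repeated_block))    # much lower: repetition makes it compressible
```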
I'm no shill for SF and SF's marketing, but I'd rather have a universally accepted SNIA spec than a fractured committee. If you're looking at writes by volume, most are small compressible writes anyway. If you look at it by size, the larger the write, the less likely it's compressible. On average, I think 46 to 67 percent encapsulates an average client workload. The host system is always writing small bits of data, while larger accesses are usually initiated by the user.
I'd like to see more transfer size and access patterns vs. data compressibility. The larger the transfer, the less random and less compressible it becomes.
If SNIA has any chance of getting the acceptance it deserves on client side storage, some concessions to SF will have to be made. But the more time I spend trying to understand SF, the more I come to terms with that. The truth is, the best SF drives are extremely competitive on speed. Latency will be a problem in some cases, but you do get some advantage even with 80% compressible data. After 46% SF really plateaus, but there is still an advantage there. Now, my time futzing with SF leads me to believe that there is more overhead than is user visible, perhaps enough to overcome the compression endurance advantage, but it could also be a case where more over provisioning would pay dividends as I've maintained for some time.
Now, this is entirely separate from steady state performance, but the more compressible the data, the longer the time to achieve steady state. I'll have to play around some more when I get home, but I also believe some of the housekeeping algorithms in 5.0 reference FW are different, but steady state performance will continue to be an area where improvement is needed. But some SF do have some generally desirable attributes above and beyond the obvious, like stellar 4k qd1 performance. That's not a SF exclusive trait, but it's one area of performance I prize.
You can babble all you want, but the fact is that the only objective test possible is using random data, as the SNIA SSS documents specify for the mandatory test. SNIA allows testers to use non-random data streams (as long as the data stream is specified) in addition to the mandatory random data stream test, but the random data stream is mandatory.
As for your other vague claims, they are highly debatable (even evidence posted in these forums contradicts them), and they have no place in an objective test specification such as the SNIA SSS testing protocols.
Those are merely my own observations, but SNIA is an industry group which needs unity from its members.
Let me know when you get elected king of SNIA.
It is easier for some to sit back and criticize others' efforts than it is for them to actually do anything to contribute.
I guess that could be some sort of nice saying;
It is easier to criticize than it is to do.
Happened to come across this and thought I would throw in my two cents. I don't agree at all with this statement and think, quite honestly, that it negates a very important piece of the pie when we speak of benchmark testing and its relationship to computer use. In simple terms, the importance of testing with 0Fill, or highly compressible data, cannot be overstated for the consumer side of things, just as testing with incompressible data (or random data samples) holds a more specific value for the enterprise side of things.
I can go back to the beginning of testing with this same argument and, quite honestly, would have believed that the naysayers against testing with highly compressible data (0Fill) would have seen the light by now. We went head to head for years with many berating the idea that reviewers, myself included, would test with highly compressible data and show its meaning and value in a review.
Imagine if you would how confused the consumer would be if we had never shown that side of things and explained the difference between the two.
Moving on, PCMark Vantage is recognized by all reviewers as being the 'industry standard' of consumer SSD testing and, well, the simple facts show that the scoring realized through Vantage follows that of testing with highly compressible data (0Fill) much more closely than that of incompressible data. Actually, the new Vertex 4 is pretty much the icing on the cake as an example of this.
I know I may be going against the views of two very good friends on this one, but the truth is that, for the typical consumer, 0Fill (or testing with highly compressible data) is just as important as testing with incompressible data for more specific needs such as video and photography, even reaching right into the business and enterprise side of things.
To make a statement that you believe testing with 0Fill data should be outlawed (colorful term) shows a very closed-minded attitude and really negates that entire side of the debate. Quite frankly, it goes so far as to put the credibility of the person making the statement into jeopardy.
Just my thoughts! It is kind of amusing actually because I can probably pull up threads on this forum just over two years ago where I stood strong on this exact subject. To my advantage, test results show a very clear picture.
As for SandForce and their marketing, as much as many might not like hearing it, it borders on brilliance. There have been very few in technology (much less the storage industry) to make the steps that they did over the course of just under three years. They have become a part of every consumer SSD manufacturer today EXCEPT Samsung and Crucial. Yup, that includes Intel. They were then purchased by LSI to top things off. Just how big would the line have been if they had gone public before the LSI purchase?
While I don't think 0-fill testing is completely pointless, I have the following points to make:
a) Sandforce drives do far too well on 0-fill testing .... almost no real world load has the same results. A sandforce with 64gig Async NAND does just as well as a sandforce drive with 64gig Toggle NAND in 0-fills. Real world situations start at 47% compressible and really just get less compressible from there.
b) Running a benchmark in 0-fill mode on a sandforce controller really reveals very little since you are really just speaking to the controller and barely touching the NAND behind it. It really isn't a very useful benchmark for exploring a sandforce SSD. If you want to see any differences, you need to look at less compressible datasets.
c) In my real world monitoring of sandforce drives, the average write amplification tends to suggest loads of between 47% and 67% compression. (Of course, during long periods of idling, I see quite a lot of NAND activity, which increases NAND writes but doesn't increase Host Writes much)
However this makes those cheap Sandforce drives with Async NAND viable. Most people can buy a dirt cheap async NAND 120gig, and it will actually perform very well for real world tasks in their system. Drives without compression using Async NAND can have pretty ugly performance stats.
0 fill is a worthless benchmark statistic unless you happen to be in marketing and would like to completely misrepresent the performance of your product.
The only benefit of running a 0 fill benchmark is that it enables an end user to mimic how marketing people came up with misleading performance statistics.
SNIA is the only benchmark that properly tests a SSD’s performance.
Edit: Disagree? Name one application that uses 0 fill data and then identify how it benefits from 60,000 IOPS.
Results from the recently purchased 2x120 GB SanDisk Extreme SSDs at 46%. I used that setting because I'm using them for OS/apps, and when I apply the lightest of compression to a full backup, it comes out to 44.9% (and I pre-archive a lot of my stuff), so it is "realistic" for at least my usage. When the drives have some wear and tear, they score closer to 5400. Sorry for leaving out the drives on the image; I hadn't planned on posting this but thought, what the heck. Still experimenting with these; the read scores are low on this particular run (they usually score 100 points higher on read). This is the OS drive after 500GB of writes in a few days, 64k stripe.
Attachment 125998
With SNIA, a random data stream is a mandatory part of the test, which makes a lot of sense. SNIA also allows additional tests with non-random data streams (which must be reported by the tester), but the random stream is still mandatory.
Clearly, random data streams should be the default for any test. If someone is testing an SSD that has the ability to do compression, they may want to add some non-random data streams (and report in detail what they used), but they should ALWAYS include a random data stream.
SNIA got this exactly right with their SSS enterprise AND consumer ("client") tests -- a random data stream is mandatory for both enterprise AND consumer tests.
http://www.snia.org/tech_activities/..._standards/pts
And I think exactly this thought process negates the purpose of testing, which is to explore performance in all typical environments. Of course enthusiasts such as yourself want the 'incompressible data' testing first and foremost, but the truth is that compressible data is utilized just as incompressible data is and, in fact, many would say more so in the typical user experience. To state on one side that 0 Fill (or testing with highly compressible data) is a worthless benchmark is tantamount to stating that testing with 100% incompressible data is just as useless.
Effective testing explores all the variables.
So how does one go about testing an SSD to SNIA standards? I mean, it is great to have a standard, and the SNIA tests look fairly comprehensive... but without a means of carrying out the tests, and with nobody doing these tests, it is all a bit academic at the moment.
But I do feel that, for Sandforce drives at least, all compression levels are worth looking at, because all will appear in typical workloads. Few workloads will persistently and consistently present an incompressible or 0-fill load, so it is good to see how performance graduates between fully compressible and fully incompressible.
If "many" would say that, then "many" would be wrong. You are arguing with Ao1 who looked in depth at that very question.
Data compressible by Sandforce controllers is relatively rare in most users' day-to-day SSD writes. About the only commonly compressible data comes from OS and program installs (not day-to-day things for most users, just very occasional) and from database and VM applications; if those are in use, it is usually by power users who are well aware of the compressibility of their data. Most users do not run large databases or VMs.
John W. my old friend... So what you are saying is that, according to Ao1, the typical user utilizes incompressible data more often in typical things such as ...ohhhh I don't know.... system starting, system software such as explorer and e-mail use and even MS Word file creation?
Whilst I believe compressibility for client applications is limited, my statement was based on 0 fill. I am always prepared to be enlightened if you can tell me how an end user benefits from 0 fill. Name one application in which it is relevant and in which the IOPS are utilised, and I will change my view.
First, a sincere apology to Anvil for derailing his thread and detracting from ASU, which is a great benchmark for end users, providing flexibility and ease of use.
The SNIA tests are beyond an end user's ability to undertake, but I believe this is the benchmark that vendors should use for their specifications. The benchmark is something that all major SSD vendors have contributed towards, and it provides granularity and comparative performance assessments that are beyond any other method of testing.
Here is a shot of drives that were tested with the SNIA specification using 65% reads / 35% writes (17 SSDs and one Enterprise HDD [edit: in yellow]). It is clear to see that there is a significant difference in performance between SSDs.
Attachment 126059
http://www.brighttalk.com/webcast/23848
Here is a shot of a Sandforce drive. Blue is incompressible, red is a database pattern and the green line is 0 fill. Interestingly, 0 fill is close to the database load, but the max IOPS come out at ~35K. Sandforce specify 60,000 burst / 20,000 sustained IOPS (@4K blocks) for the SF2x drives and 30,000 burst / 10,000 sustained (@4K blocks) for SF1x drives.
Attachment 126060
Sandforce don’t state how they arrived at their specification figures, but presumably they were obtained on a FOB drive using 0 fill. The SNIA test is based on steady state, which is the representative condition of a drive in use.
To prevent Anvil’s thread from being further derailed there should be a separate thread to discuss SNIA. There is already a thread to discuss SF compression.
Here are results with incompressible and compressible data in three different drive states (FOB, Steady State, and Overprovisioned) with SandForce enterprise class drives:
http://thessdreview.com/our-reviews/...-ssd-review/4/
Corsair P256 Pro on Sata2
http://www.abload.de/img/mfauuiro.png
It is not a problem at all, as long as it's conducted in a civilized manner :)
The next or subsequent beta will include an option to use real-life data when testing. One_Hertz gave me the idea some time ago, and I've been testing a lot of configurations (drives/controllers) over the last few weeks; it's looking good so far.
Here's my results with 8 Intel 520 180GB SSD in RAID0 with 64KB stripe size. Areca ARC 1880ix-24, 4GB Unigen DIMM, Battery Backup.
First is zero fill, second is 100% incompressible. The hit isn't too bad considering it's a Sandforce controller. This is a 32GB test as well since the 1GB tests are heavily influenced by the controller's cache. 1GB scores are over 14,000!
http://i157.photobucket.com/albums/t...20429-1036.png
http://i157.photobucket.com/albums/t...20429-1040.png
67% PlextorM3 Pro Raid0 64kb stripe
http://i.imgur.com/2UBAx.jpg
looking good freak!
8X Samsung 256GB RAID0 on Areca ARC1880-IX 24 4GB DIMM Battery. Incompressible test run. (compressible shows same) 32GB size.
http://i157.photobucket.com/albums/t...20508-1809.png
That's some sexy stuff, Rubicon.
Single V4 256GB on Marvell 9182 Controller, using latest Marvell AHCI driver, default settings in Anvil Utility:
http://dl.dropbox.com/u/4008284/Benc...ellDriver2.PNG
It obviously comes short of Intel 6G, but seems to be among the very best of Marvell SATA 6G options.
Regards,
tweak
9182 is not bad at all, much better than the 9128 :) and the best I've seen on 1366 as a "supplemental" controller. (I think it's on the MIVE as well, Z68 that is)
What firmware are you running on the V4?
That is the 1.4RC running, as the sequential writes suggest.
PS: I managed to revive the V4 last night (much hassle), but one other drive connected to the 9182 (Vertex 3) took a dive at the same time and is still "unconscious". I will look at reviving it tonight. However, this suggests that there was something going on at controller level when the dropout happened, and not necessarily something that has to do with TRIM being disabled as I first thought in the PM I sent you. Will investigate further when my daily load at Nextron is done.
Regards,
tweak
OK, so that's where we've "met" :)
(can't remember if we talked or just e-mailed about some RAID controllers I ordered from Nextron :))
I can't remember either. I play sweeper on the Nextron team, you see. I am a technician, building/repairing/installing systems and onsite install/repair, but also on sales and some website/webshop stuff. Hence they demanded I put Sales Engineer on my business card. These things taken into account, along with my horrendous short term memory (the cache just flushes to /dev/null), I rarely remember not-so-regular customers. No offense to you. I'm convinced that if you said "This is Anvil", I would have no trouble remembering it (it's funny how incidents where you $hit your pants seem to stick to memory). Such information would be stored with so much redundancy and replicated over so many racks and datacenters in my brain that it would be stuck solid.
Enough off topic about me...
Regards,
tweak
Updated the OROM/BIOS and overclocked the CPU even further to 4377MHz (515*8.5), and finally managed to break 30MB/s 4K reads in AS-SSD; the ASU score went up a bit as well :)
http://www.abload.de/img/mfa_corsair_performanwlca7.png
If you are feeling adventurous you can always overclock PCIE.
We performed a few (quite a few) tests a few years back (2010?) and I ended up with a huge PCIE OC on a GigaByte X58-UD7. (that is Socket 1366 not 775)
(the thread is here in the storage section somewhere, it could have been in my "C300 vs Vertex LE" thread)
A few days ago I tried to run PCIe at 110MHz but it wouldn't boot at all.. I could make it work at 105MHz but unfortunately I didn't see any changes in the ATTO test.. I have an HDD RAID array too, maybe it breaks something.. I'm gonna try removing the HDDs and trying again..
Gonna check the thread, thank you..
edit: oh, I guess you were saying that to tweakr since he has an onboard PCIe controller, lol
Kaktus,
It was meant for you!
OCing the PCIe can lead to a lot of issues (typically with raid controllers and possibly the video card); I ended up at >115MHz from what I recall.
Don't let that put you off from testing more yourself. The fact that I also have a PCIe Revo in my rig makes it more interesting. Although I have pretty much the same experience as you regarding what will boot. I'm no hardcore overclocker, so I suspect I need some pointers. If you find the thread, please let me know.
Sent from my superior HTC Sensation Z710e using Tapatalk 2
Thanks Anvil, I thought PCIe speed wasn't that important for the native Intel ports; I'll try again, as I said, after removing the HDDs..
Sure, here it is; lots of info, not just PCIe: WBC on/off, stripe sizes on ICH10R etc :)
http://www.xtremesystems.org/forums/...0+vs+Vertex+LE
Thanks for the linky!
I managed to get as far as 120MHz PCIe frequency, using 1.8V PCIe voltage.
Think I'll settle for that for now. Here's the result of the V4 from that:
http://dl.dropbox.com/u/4008284/Benc...CIe_120MHz.PNG
Regards,
@Anvil,
I wanted to check with you regarding when we might expect to see the next version?
Thanks, and keep up the good work.
Thanks!
It won't be long, just checking out some settings and/or possible tweaks on the X79. (C600)
Good, I can hardly wait! Thanks Anvil.
Thanks Anvil, you are a legend.
Good to hear!
Will it include an optional path for placing the log?
Sent from my superior HTC Sensation Z710e using Tapatalk 2
I have thought about it and it is perfectly doable.
There is a bit of work related to a "remote" log, and as I need to get RC1 out this weekend, chances are slim.
Thanks for clearing that up. I have a different FW to test on the V4, and I would like to have a log if the same thing happens again, hence I asked. It isn't a matter of life and death here, so take your time and make it right. When it's done, it's done.
Regards,
tweak
Release Candidate 1 (2012-05-19)
- Expires September 2012
Fixes
-Fixed bug when USB drives are connected
-Fixed bug when there are unpartitioned drives
What's new
-ASU now requires Administrative rights.
-USB drives are now supported
-TRIM can be triggered if TRIM is supported by the OS; this will work on most drives that support TRIM.
-Default Compression is set to 100% (Incompressible)
-New setting for Enabling/Disabling testing with "Write-Cache Buffer Flushing" on the X79 when using the 3 series RSTe driver.
-Endurance : Layout changed
Temporary test folder renamed from _AP_BENCH to _ASU_BENCH, the folder is at the root of the drive.
In case the system shuts down and ASU is not able to clean up the test folder, the contents can be deleted.
Files produced by ASU Benchmark have the following extensions : TRM and TST
Make sure that you don't have any folders that are in conflict with the test folder.
new download link for RC1
--
New functionality will follow in the next release.
Thanks Anvil. :clap:
Just in the nick of time to try on my new Samsung today. :D
The "TRIM-trigger" needs some explanation.
To put it simply, it works more or less like the TRIM function found in Intel SSD Toolbox.
However, unlike the Intel Toolbox it can be attempted on any drive; it won't work on all drives, though!
It won't work unless TRIM is working on the drive.
It won't work if the drive does not respond to the "normal" way of triggering TRIM.
It works in just seconds.
The time it takes for the drive to do the actual cleaning varies between drives. (from seconds and up...)
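For drives where the trigger doesn't respond, the old manual fallback is to fill most of the free space with a temporary file and then delete it, so the filesystem issues TRIM for all the freed LBAs. The sketch below shows that general approach only; it is not how ASU implements its trigger (it is far slower and causes real writes), and the chunk and headroom sizes are arbitrary:
```python
import os
import shutil

def trim_free_space(drive_root, chunk=64 * 1024 * 1024, keep_free=1 << 30):
    """Fill most of the drive's free space with a temp file, then delete it.

    On an OS with TRIM enabled, deleting the file makes the filesystem
    send TRIM for all the LBAs it occupied, prompting the drive to clean
    up. Random data is used so a compressing controller actually writes
    it to NAND. Use sparingly: this causes real wear.
    """
    path = os.path.join(drive_root, "_trim_filler.tmp")
    try:
        with open(path, "wb") as f:
            while shutil.disk_usage(drive_root).free > keep_free:
                f.write(os.urandom(chunk))
                f.flush()
                os.fsync(f.fileno())     # make sure free-space figures update
    finally:
        if os.path.exists(path):
            os.remove(path)              # the deletion is what triggers TRIM

# trim_free_space("D:\\")   # example usage on a secondary drive
```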
Here's an example of before and after TRIMming my Endurance drive.
Before
Attachment 126900
After
Attachment 126901
Thanks for the update.
Hopalong X -- I hope you told your wife I forced you to get an 830.
Anvil -- RC is "Le Sweet"!
Thanks Anvil for the update, everything seems smooth.
I like this proggie very much, hope to see more from them.
My little bench:
http://i45.tinypic.com/25kptfa.jpg
Hmm, with the last version write got better, but not read?
http://i45.tinypic.com/cuuty.jpg
Looks like Win7 was doing something that can influence the bench:
http://i45.tinypic.com/ly1lf.jpg
http://i48.tinypic.com/1j0xsm.jpg
There will always be some variations on SSD benchmarks!
Having a raid controller usually masks some of the variations but SSDs just aren't performing 100% consistently.
SSDs are very different from HDDs in this regard.
Dropped off significantly from BETA 11
http://i291.photobucket.com/albums/l...nch5-19-12.jpg
Arctucas
Looks like your previous benchmark was set for 0-Fill (easily compressible data).
If you are running drives based on the SF controller, that would be what changed the score; you can change the Compression setting in Settings.
There is no change to the scoring system.
Arctucas, are those two 240GB 2281's with asynchronous flash?
@Anvil,
I just noticed that.
Anyway, I went back to RST 11.1.0.1006, and got a better result:
http://i291.photobucket.com/albums/l...RC15-21-12.jpg
I will give the RST 11.5.0.1149 a run with 0-fill.
EDIT: RST 11.5.0.1149 0-Fill:
http://i291.photobucket.com/albums/l...1-25-21-12.jpg
It appears the new RST does affect the benchmark significantly, no?
@Cristopher,
No, 4x 64GB SSD with SF 1222 (SATA II).
@anvil, thanks for the great work. Is this the right place for suggestions and visual improvements?
(with high contrast theme)
Hi
Yes, you can post suggestions here.
As a matter of fact I'm working on an alternate theme with more "Metro" friendly/-like colors, I'll post a few screenshots when I'm ready.
thank you ;)
I'm talking about the high contrast scheme for accessibility; I use a black/white scheme and some elements are not viewable (such as the settings page) or hard to read (light gray labels on a dark grey background ..).
I know they are just little things, but it's important for partially sighted people (like me).
Attachment 127212
Attachment 127213
Would this work better for you?
This is using the High Contrast Black Theme (default)
Attachment 127214
Yes, it's OK. Did you modify something?
(thanks a lot)
Last year I did some benchmark analysis with a Vertex 2 60GB and ASU (beta5?).
Some graphs:
Attachment 127222
Attachment 127223
Attachment 127225
Attachment 127224
I think I spent a few days with Excel ;)
Great work there 2f_tfe!
You're going to need an update, I had to make some adjustments.
Anvil, thanks a lot
I have got a lot of notes about the old beta versions; I will check the new RC1 to find out the improvements, just 2-3 things .. don't worry ;)
bye
luke