The V should have 5k cycle NAND (34nm).
Morning update:
310 hours, 91.2361 TiB. Wear Leveling Count and Percentage of the rated lifetime used have gone from 50 to 48.
Avg speed reported from Anvil's app is 88.37 MiB/s
MD5, no errors.
Attachment 117650
Is anyone testing endurance of the Intel 520 Series? Or do we expect that to match one of the profiles we're currently testing, due to the same or similar NAND tech being used? I'm trying to find relatively inexpensive drives for a write cache for the array, which I would expect to take >200TiB of writes per drive in, say, 1 year. I know the Pliant ones can do this, but they're 10x+ the cost of everything else.
@stevcs
I expect you are thinking of the Intel 510 Series SSD (the 520 Series is due Q4?)
No-one is currently testing the 510 Series; being 34nm it should do well vs. the m4. You should also consider the 320 Series 80-120GB, as 25nm looks to be great!
The Marvell 9174 controller is used by the Crucial M4, Intel 510 & Corsair P3; however, both Intel & Micron produce their own unique firmware. There may also be other significant differences: NAND configuration, DRAM, spare area, etc.
Intel do, however, specify workload parameters for Client and Enterprise applications, so those specs could be used to determine suitability for a particular workload. As can be seen in this thread, those workload estimates appear to be quite conservative.
A small increase in reserve area would be enough to significantly increase workload capability, if the workload were a problem.
@Anvil - you're correct, sorry (always thinking a generation ahead); I meant the 510/Elmcrest, not the 520/Cherryville. The problem with the 3xx is that they're all 3Gbps. Not a problem in itself (as I don't expect a single drive to do more than 100MB/s), but I'm looking to put them all into external chassis, so I want to make sure everything is running at 6Gbps for the expander(s). (I've run into problems before mixing/matching; it's better to have everything on the same signaling.)
@Ao1, yes, that's what I thought as well. I was actually planning on 28% or so over-provisioning for whatever SSD; I'm just trying to pick the right ones for the workload. Since this is a 'write cache' drive (its main purpose is to cache all random writes to the back-end datastore, which is ~200TB) seeing about 100GB/day, I don't want to have the replacement issues we see at work (I don't know the actual brand, but EMC & Oracle use SSDs for their Tier 0 in the SANs; under heavy database loads they don't last a year. Not a big deal there, as the clients are paying for the speed, so thousands of $$/drive is not an issue. A 'little' different for a home system, however. ;)
While it can't be shown in this test, here is a test someone ran that measured the durability of an Intel X25-V:
http://translate.google.com/translat...ogle.com&twu=1
Don't know if this will help, but as a guide the Intel 320 is spec'd for around 15TB with a 4K 100% random workload over the full span (the TB figure varies with drive capacity). Reducing the span and adding over-provisioning will increase write capacity significantly, but 200TB of random 4K writes is going to be a tall order for any non-enterprise SSD, any way you try to cut it.
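To put rough numbers on that, here is a back-of-envelope sketch. It's a minimal illustration only; the NAND size, rated P/E count and WA figures are assumptions, not Intel specs, but it shows how the write amplification that over-provisioning tames dominates total host writes:

```python
# Back-of-envelope endurance estimate. All inputs are illustrative assumptions.
TiB, GiB = 2**40, 2**30

nand_gib = 48      # physical NAND, e.g. a 40GB drive with 48GiB on board
rated_pe = 5000    # assumed rated P/E cycles for 34nm-class NAND

def host_writes_tib(write_amplification):
    """Host writes (TiB) before the NAND reaches its rated cycles."""
    return nand_gib * GiB * rated_pe / write_amplification / TiB

print(f"{host_writes_tib(15):.1f} TiB")   # ~15.6: full-span 4K random, high WA
print(f"{host_writes_tib(1.5):.1f} TiB")  # ~156.3: more OP / smaller span
```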
@stevecs
What workload are you talking about?
If there are loads of small random writes then you need to buy a drive like the X25-E or the new 710 Series.
The 710 Series is priced at ~$700-$750 per 100GB and is available in 100GB, 200GB and 300GB capacity.
(price based on ~4K NOK ex. VAT for the 100GB and a 5.5 exchange rate)
Actually, I /WAS/ waiting on the 710 Series for this; it's just that I'm antsy and wanted something sooner. Considering they keep pushing back the release schedule, right now I can't even find a solid release date for the 710s. The E's are old technology at this point and frankly don't really hold up to the type of workload I would have (4KiB writes generally, and then block reads as the write cache is flushed to the main storage system). So not too friendly for SSDs, but latency is the key here, so it's either SSDs or battery-backed RAM.
I've heard within a month or so; as the price has surfaced in the last few days, I expect there is hope for such a timeline.
The Intel 710 series is SATA 3Gbps.
Evening update:
Due to a power failure, my PC got shut down this evening. When I started it up again, Anvil's app started from 84 TiB (from when I updated to the latest version). Anvil is helping me fix it so the log is correct. This evening you'll just have to enjoy my SMART data.
1635 P/E used, ~95 TiB. Speed before the power failure was 88.36 MiB/s; AD is down from 48 to 46. No MD5 errors before the shutdown.
I'll start up again when it's ok.
Attachment 117667
Every loop is saved in a second table, so B.A.T's issue was fixed.
--
159.08TB Host writes
MWI 13
Reallocated sectors, 6.
MD5 OK
Do you guys think the Intel 520s will use SandForce controllers?
ATM, SandForce 1 is to be avoided because of the cold-boot bug and other problems people are having with these drives suddenly dying, as well as write throttling. Also, the NAND used is of arguable quality, and the various bait-and-switch methods used by vendors (25nm and SpecTek etc.) are not reassuring.
I don't know about SF-2, but I doubt much has improved, since I see loads of people having issues there as well.
If you want the most usable SSD for daily use, then I think this thread has shown that you cannot go wrong with the Intel, Crucial and Samsung drives.
What do you guys think?
Maybe someone can set up an automatic reboot / re-secure-erase / repeat script, so we can test secure-erase endurance too without needing manual input? It should not be too hard to do on Linux with an old machine. Too bad I don't have the SSD or the cash to dedicate to one, because I could get this old machine and the script ready. Maybe someone else can try on their own, or help me do this?
I think it really is important that we also test this aspect of endurance, as it really pushes the SSD to its limits: it is like writing the whole capacity of the SSD in about 10 seconds, and it uses one NAND cycle on all of the SSD's cells!
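For what it's worth, here is a minimal sketch of what such a script could look like on Linux, assuming hdparm is installed and the drive is not security-frozen (a suspend/resume cycle or hot-plug usually unfreezes it). The device path, password and loop count are placeholders, and this wipes the target drive on every iteration:

```python
#!/usr/bin/env python3
# Hypothetical unattended secure-erase loop (sketch only). Double-check the
# device path: this destroys all data on it, every single iteration.
import subprocess
import time

DEV = "/dev/sdX"   # placeholder: the SSD under test, NOT the OS drive
PWD = "p"          # throwaway ATA security password
LOOPS = 1000       # placeholder cycle count

def run(*args):
    subprocess.run(args, check=True)

for i in range(LOOPS):
    # Set a temporary ATA password, then issue SECURITY ERASE UNIT.
    run("hdparm", "--user-master", "u", "--security-set-pass", PWD, DEV)
    run("hdparm", "--user-master", "u", "--security-erase", PWD, DEV)
    print(f"secure erase #{i + 1} complete")
    time.sleep(5)  # short breather between cycles
```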
Intel should stick to Intel controllers... I don't want anything SF based at this point.
Has anyone started testing a 240GB Vertex 3 yet?
Quote:
Has anyone started testing a 240GB Vertex 3 yet?
Lifetime write throttling would make it impossible.
Morning update:
My problem from last night was fixed by Anvil, and now everything is running as normal.
97.2497 TiB. 332 hours (the downtime was 2 hours). Wear Leveling Count and Percentage of the rated lifetime used have gone from 46 to 44.
Avg speed reported from Anvil's app is 90.42 MiB/s. Looks like the 2-hour break improved my speed some.
MD5, no errors.
Attachment 117679
Does nobody think the secure-erase endurance testing is as important as what we are doing now?
Actually, I think it is more important, because it stresses the SSD and its cells much more than the continuous writing we are doing now.
@bulanula
No. At the moment this is the test that's going on, and it will continue to run for quite some time, so all my resources are bound to this test method for now.
You don't need to ask in every other post; I do read every word that is written in this thread, and you are the only one pushing that test "pattern".
As One_Hertz mentioned, it will most likely be a manual process, and that would make the test time-consuming.
--
160.46TB Host writes
MWI 12
Reallocated sectors : 6
MD5, no errors.
I just want to point out that, even though the test would be interesting, it has no relevance to real-life use. Testing random 4K writes simulates heavy OS paging, while continuous erasing has no similar real-world usage pattern. And it might be implemented differently from one manufacturer to another; for example, if I were producing SSD firmware, I would definitely add an "if" statement that skips erasing blocks that are already erased. Writing a few MiB/GiB of data before each erase also does not give you any guarantee that you will hit all blocks, because of the way different controllers choose to cluster pages.
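A toy illustration of that "if" statement, purely hypothetical and not any vendor's actual firmware logic:

```python
# Sketch of a firmware that skips the erase for blocks it knows are blank,
# so a secure-erase loop would not necessarily cost one P/E cycle per block.
class Block:
    def __init__(self):
        self.erased = True      # fresh blocks start out erased
        self.erase_count = 0    # real NAND wear: one P/E cycle per erase

    def erase(self):
        if self.erased:
            return              # the hypothetical "if": no wear for blank blocks
        self.erase_count += 1
        self.erased = True

    def program(self):
        self.erased = False     # programming dirties the block
```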
C300 Update
50.51TiB, 83 MWI, 850 raw wear indicator, 61.8MiB/sec, MD5 OK.
Attachment 117681
Updated charts :)
Host Writes So Far
Attachment 117690
Attachment 117691
(bars with a border = testing stopped/completed)
Raw data graphs
Writes vs. Wear:
Attachment 117692
MWI Exhaustion:
Attachment 117693
Writes vs. NAND Cycles:
Attachment 117694
Normalized data graphs
The SSDs are not all the same size; these charts normalize for total NAND capacity.
Writes vs. Wear:
Attachment 117695
MWI Exhaustion:
Attachment 117696
Write-days data graphs
Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.
Writes vs. Wear:
Attachment 117697
MWI Exhaustion:
Attachment 117698
SandForce really depends on the SSD manufacturers (OCZ, Kingston, Patriot, etc.) to enforce their own quality testing and production procedures. While this limits SandForce's financial risk and investment costs, that sword cuts the other way too: it opens the door for drive manufacturers to release improperly validated designs that then reflect back on SandForce's own reputation for quality.
But in all honesty, knowing all that I know, if I wanted to buy an SSD today,
I would still buy OCZ SandForce-based SSDs (3-year warranty);
they are typically the best bang for your buck.
And I wouldn't be one of those who complain about getting a 25nm vs. a 34nm drive.
As with any type of mass storage, just make sure you back up your vital data.
Not a "Vertex 3", but does a 256GB SF-2582 count?
https://lh6.googleusercontent.com/-8...9305699018.jpg
OEMs :)
It's quite the opposite, actually.
Secure erase stresses the cells less than continuous writing,
unless you are talking about a drive that has mil-spec secure-erase features (and none of you have that).
Evening update:
100.7112 TiB
343 hours
Avg speed 90.32 MiB/s.
AD gone from 44 to 42.
P/E 1755.
MD5 OK
Attachment 117710
Looks like the Samsung is still pulling through. Impressive to say the least, considering it has a quite a bit higher WA compared to the Intel/Crucial drives. Maybe their NAND has more cycles in it than Intel/Micron ONFI NAND?
Did you get to try it? (no hurry)
How is the 320 doing :)
--
Having some minor issues on the AMD rig, not sure what's going on and so I've moved the drive to an Intel rig just to make sure that there are no issues with the drive.
(no SMART errors reported but the drive is dropped and is logged with reference to a "controller error", happened twice in 10 minutes)
So, either it's wearing out the IO subsystem on the MB, or there are developments on the drive.
I've run diagnostics (Intel Toolbox) and there were no issues, it's running the "Endurance test" now just fine although it's been just a few loops.
I'm going to let it run a few more loops before moving it back.
161.82TB Host writes
MWI 12
Reallocated sector count : 6
MD5 OK
Quote:
Having some minor issues on the AMD rig, not sure what's going on and so I've moved the drive to an Intel rig just to make sure that there are no issues with the drive.
ooohhh, high drama! Interesting!
*grabs popcorn*
Doesn't look like the drama is caused by the SSD; it's been > 45 minutes with no issues/errors.
It's late, so I'll just have to move it back to the dedicated test-rig; will know for sure what's going on in 6-7 hours :)
edit:
OK, I've stopped the testing and ran a Full Diagnostics scan, all OK.
Attachment 117714
Odd that the motherboard would start taking issue with the testing before the SSD :wth: :lol:
:)
That port has handled some 100TB of writes.
Well, let's see what tomorrow brings; moving it back to the test-rig right now.
Amazing thread and information, and it's great to finally have solid evidence that SSDs will last much longer than most would ever expect... especially since I have been saying as much since '07.
Hate the fact that I came in late and am now trying to catch up with it all, but it would be great to have a chart showing total TB written before end of life, followed by the number of years that would equal at the daily write estimate the manufacturers base their calculations on.
Personally, I see this as much bigger than most would think, and I have at least 3 manufacturer reps looking it over after sending them the link earlier. It truly is the closest we have seen to reliable endurance estimates, and it sure puts to rest any thought that SSDs may have a limited life span.
Just my two cents; thanks all, and especially Anvil for that benchmark, which, IMHO, is the best going right now; I have used it in several reviews... Thank you very much, Anvil!
Morning update:
104.0416 TiB
353 hours
Avg speed 89.90 MiB/s.
AD gone from 42 to 40.
P/E 1812.
MD5 OK
Attachment 117721
Not sure what really happened; it has to be related to the test-rig in some way, as the drive was moved to the other rig within minutes, had no idle time, and diagnostics were all OK.
(It could be caused by a lot of factors: cabling, heat, OS, drivers, ...)
I'll just have to keep an eye on it; it's been running as usual since I restarted the test. No MD5 errors, nothing.
163.07TB Host writes
MWI 11
Reallocated sectors, 6
MD5 -> OK
Evening update:
107.9518 TiB
366 hours
Avg speed 89.60 MiB/s.
AD gone from 40 to 38.
P/E 1880.
MD5 OK.
Attachment 117754
C300 Update
56.83TiB, 81 MWI, 957 raw wear indicator, 61.8MiB/sec, MD5 OK
Attachment 117756
Nice to see the Intel closing in on the 200TB mark!
Morning update:
111.2420 TiB
377 hours
Avg speed 89.43 MiB/s.
AD gone from 38 to 36.
P/E 1938.
MD5 OK.
Attachment 117772
Strange that the M4 is already at MWI 36. I wonder what this value really means, and how the C300 will behave at the same stage!
Nice work; thanks for sacrificing your SSDs in the name of science.
It's reassuring to know that they might have the stamina for things like Intel's Smart Response without having to spend the $$$ on something like Larson Creek.
Evening update:
115.3564 TiB
390 hours
Avg speed 89.23 MiB/s.
AD gone from 36 to 34.
P/E 2010.
MD5 OK.
Attachment 117785
Heh, but I still would not bet on the Samsung writing more than either Intel before failure. Even though the Samsung has 64GiB of flash on board and the Intels have only 40GiB and 48GiB, I suspect the Samsung has a much higher write amplification, which is a significant handicap. But I am not certain about the WA, so I could be wrong about that. We will see!
Has anyone heard from One_Hertz? It has been a few days since he posted, or have I missed something?
The Intel 320 40GB has 48GiB of NAND? Hmmm, that would put its WA at something like 1.2x, which I find odd (being worse than the X25-V). Could that have been a trade-off for speed compared to the X25-V? Does it really need 28.8% (37.27GiB usable on 48GiB of NAND) spare area?
Either way, 48GiB of NAND changes the charts.
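For reference, the spare-area arithmetic behind that 28.8% figure, using the capacities quoted above:

```python
# Spare area = (physical NAND - user-addressable capacity) / user capacity
physical_gib = 48.0   # NAND on board
usable_gib = 37.27    # user-addressable capacity of the 40GB model, in GiB
spare = (physical_gib - usable_gib) / usable_gib
print(f"{spare:.1%}")  # ~28.8%
```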
C300 Update:
62.53TiB, 79 MWI, 1052 raw wear indicator, 61.85MiB/sec, MD5 OK
Attachment 117799
I'm not sure about the NAND on the 320 40GB; I'm sure One_Hertz can find out. I'll check mine as well, as soon as I can get hold of that Torx screwdriver (will try tomorrow).
I'll make my next report in about 8-9 hours; I've been away for the weekend and the rig has been powered off.
No, no, no! ;)
The 40GB 320 does indeed have 48GiB of flash on its circuit board; that is a fact (no need for anyone to double-check unless you really want to), but you need to read what I wrote! Only 40GiB of the flash is used for normal operation; the extra 8GiB is used for XOR parity data. So WA should be calculated assuming 40GiB of flash.
That is similar to SandForce SSDs where, for example, a 120GB SandForce drive usually has 128GiB of flash on board, but only 120GiB of the flash is used for normal operation (data storage and reserved space), while the extra 8GiB is used for so-called RAISE, which is SandForce's name for RAID-4-like parity. Although I am less certain with SandForce (as compared to the Intel 320) about how they actually implement RAISE; it could be RAID-5-like as far as I know, but I'd guess RAID-4-like.
217TiB. 24 reallocated sectors. Anvil - I am trying to determine if I can locate that hidden SMART variable. I got the output from your app and will start tracking it to see changes.
They don't specify it on their current slides for RAISE, but the older ones they provided to AnandTech said "like RAID-5". They wouldn't go into detail about it, however.
It is amazing to me that they can dedicate such resources to parity on these devices. I wonder at the sophistication of the parity scheme. For instance, if you look at many RAID controllers (essentially the same concept as an SSD with its controller and NAND), running a parity RAID set can really cripple write speed in many scenarios. And those full-blown RAID controllers use chips that are immeasurably faster than the low-wattage chips present on an SSD.
I knew the 160GB and larger versions had parity; I kind of figured it was dropped from the smaller sizes for $$$$ reasons. I never knew this about the smaller 320s, very interesting. Went and read more on the 320; it seems the parity was introduced to make up for 25nm deficiencies (in line with what SandForce/vendors did when going to 25nm, just the opposite of taking away usable space and calling it the same size on the label).
Frankly, I would prefer to calculate WA based on total onboard NAND regardless of its designed usage (WA inflated by parity doesn't seem so bad as long as it's explained... after all, for every 1 byte sent, ~1.2 bytes do get written). But we don't know exactly how much parity data is being written, so I won't (I'm not sure it's safe to assume that the full 8GiB of the sixth die is used for parity). I'll continue backwards-calculating WA using wear indicators multiplied by total non-parity NAND (not that WA is fluctuating for any of the drives). And with the SandForces there's no need to backwards-calculate WA, thanks to SMART 233.
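For anyone who wants to reproduce the backwards calculation, a minimal sketch; the inputs are illustrative numbers in the style of the reports in this thread, not measurements from any particular drive:

```python
# Backwards WA: NAND writes (cycles x non-parity NAND) divided by host writes.
GiB, TiB = 2**30, 2**40

nand_non_parity_gib = 40   # e.g. Intel 320 40GB: 48GiB on board, 40GiB non-parity
avg_pe_cycles = 2000       # average erase cycles, from the wear attribute
host_writes_tib = 70       # host writes reported by the test app

nand_writes = avg_pe_cycles * nand_non_parity_gib * GiB
wa = nand_writes / (host_writes_tib * TiB)
print(f"WA ~ {wa:.2f}x")   # ~1.12x with these example numbers
```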
As for normalized writes, it's probably easiest to base them on IDEMA capacity. This means everything, relative to the other drives, is unchanged except the 40GB V2, which I had based on 48GiB and will now base on 40GB. This means 55GB for the '60GB' 25nm SF-1200 drive (not sure what capacity the SF-2200 25nm 60GB SSDs have). (Aside: I realize normalized writes don't quite work if OP proportions vary within a product line, sigh.)
Out of curiosity, how much NAND do the 80GB and 120GB 320s have?
With the new (to me) knowledge that the 320 40GB has a parity scheme... I don't see the 320 40GB dying for quite a while.
ClearNAND does have an integrated controller on the NAND itself, offloading ECC functions from the processor.
Right, we do not know. Intel calls it "XOR parity", which is fairly vague (single parity is by definition an XOR function, so "XOR" does not really add any information), and has reportedly described it as "RAID-4 like", but not exactly RAID-4. That makes sense: if it were exactly RAID-4, the parity flash would wear out much sooner than the other flash, because RAID-4 parity has to be re-written every time any of the other flash chips are written. So Intel must be using some tricks to avoid wearing out the parity flash too quickly, and we cannot make any assumptions about how much parity data is written. Regardless, I suspect that the block-erases for the parity flash are not included in the SMART attributes.
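To make the XOR discussion concrete, here is a toy single-parity example; real SSD parity schemes are certainly more elaborate than this, it is only the core idea:

```python
# Toy RAID-4-style parity: parity page = XOR of the data pages.
pages = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4]  # hypothetical data pages

def xor_pages(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

parity = pages[0]
for p in pages[1:]:
    parity = xor_pages(parity, p)

# If one page becomes unreadable, XOR-ing the survivors with the parity
# reconstructs it.
rebuilt = xor_pages(xor_pages(pages[0], pages[2]), parity)
assert rebuilt == pages[1]

# The wear problem with literal RAID-4: every data update also updates the
# dedicated parity page, so the parity flash would rack up P/E cycles much
# faster unless the firmware rotates or batches the parity writes.
```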
As for how much flash is on board the 80GB and 120GB Intel 320 SSDs, I am not certain. I don't have either of those models, and I have been unable to find a circuit-board picture of them anywhere on the Internet. The ones I know for certain are 40GB/48GiB, 160GB/176GiB, 300GB/320GiB, and 600GB/640GiB. I'd guess the 80GB and 120GB models have either 8GiB or 16GiB extra (one or two 8GiB packages).
Morning update:
117.8156 TiB
399 hours
Avg speed 89.10 MiB/s.
AD gone from 34 to 32.
P/E 2053.
MD5 OK.
Attachment 117811
Here is a circuit-board picture of the Intel 320 80GB:
Link
EDIT:
I had written a bunch of words about how I could not believe that was a production unit, since it only had 80GiB of flash.
Then I looked again. The tenth flash chip, in the lower right corner, is 16GiB :eek:
29F16B08CCME1
So the 80GB model has 88GiB of flash (nine 8GiB packages and one 16GiB).
(Insert clip of Christopher Lloyd yelling "88 gibibytes!")
Attachment 117815
I also see that the 120GB model apparently has six 16GiB and four 8GiB packages, for a total of 128GiB (the link says the gross capacity is 128 MiB, oops!), according to the table on this page:
http://translate.google.com/translat...20_Series_SSDs
Nice! Thanks for posting the pictures.
So the 40GB, 80GB, and 120GB Intel 320 models all have 8GiB of flash for parity; the 160GB model has 16GiB, the 300GB has 20GiB, and the 600GB has 40GiB.
It is also interesting that the only 320 models with flash on the back of the circuit board are the 160GB and 600GB models.
164.98TB Host writes
MWI 10
Reallocated sectors, stuck at 6.
MD5 OK, 33.06MiB/s on avg for the last 16.5 hours.
edit:
@One_Hertz
Let's hope there are some interesting figures showing up.
johnw and omgFire
Thanks for the info and pictures.
That fills in a lot of information for me on my Intel 320 120GB.
Very interesting to see the 320s' various spare-area sizes.
C300 Update
65.644TiB, 78MWI, 1105 raw wear indicator, no reallocated sectors, MD5 OK, 61.85MiB/sec.
Well, the other way you could say it is:
they (SandForce, Intel, etc.) take a kind of performance hit (or price hit, depending on how you look at it) to improve error-correction capability by many orders of magnitude.
They need to guarantee data integrity better than hard drives do, for their serviceable lifetime.
Oh yes, the gains are definitely worth it. I wish we had more data on the exact parity scheme used, and whether or not it is handled by the controller. Amazing that they pull it off with such low-wattage devices; I wonder what kind of performance could be had if you used an apparently low-overhead parity scheme like these on a full-powered RAID card.
According to post #783, an Intel 25nm 64Gb logical unit = 69,120Mb, or 67.5Gb.
64Gb LU = 69,120Mb / 8.4375GiB
128Gb LU = 138,240Mb / 16.875GiB
6 x 16.875 + 4 x 8.4375 = 101.25 + 33.75 = 135GiB
Of the 8,640-byte page size, 448 bytes are for ECC.
Total NAND capacity = 135GiB
Total NAND excluding ECC = 128GiB
Format capacity = 111.70GiB?
Intel SSDSA2CW120G3 has 234441648 LBA sectors, or 111.79GiB. Exactly as shown in the Intel Solid-State Drive 320 Series Product Specification.
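That sector count lines up exactly with the IDEMA LBA1-02 convention for marketed capacities, which a quick sanity check confirms:

```python
# IDEMA LBA1-02: LBA count = 97,696,368 + 1,953,504 x (marketed GB - 50)
def idema_lbas(gb):
    return 97_696_368 + 1_953_504 * (gb - 50)

lbas = idema_lbas(120)
print(lbas)                # 234441648, matching the drive
print(lbas * 512 / 2**30)  # ~111.79 GiB, as reported above
```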
172.938 TiB, 478 hours, sa177: 1/1/14254
Hey, the 470 has overtaken the G2!
Average speed reported by Anvil's app has been steady at about 112MB/s.
The other two unknown SMART attributes, 178 and 235, are still at 72/72/276 and 99/99/2, just as they were when the SSD was fresh out of the box.
omgFire
My Intel 320 120GB is showing 111.79 GB in Windows Disk Management,
and 111.8GB in the Intel Toolbox.
Whether GiB or GB, no argument from me; just agreeing that my actual numbers match yours. :up:
So the Intel Toolbox is reporting in GiB; it only reports tenths as the last digit. 0.0
Windows Disk Management is reporting GiB too, at least on some SSDs. On my Intel 320, anyway.
It's not a GB/GiB translation issue. What I tried to point out was that a 64Gb logical unit actually consists of 67.5Gb, the difference being set aside for ECC.
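A quick check of that figure from the page geometry quoted earlier (8,192 data bytes + 448 ECC bytes per 8,640-byte page):

```python
# A "64Gb" logical unit stores 64Gb of data, but each page also carries ECC.
page_total, page_data = 8640, 8192
physical_gb = 64 * page_total / page_data
print(physical_gb)  # 67.5, matching the 69,120Mb / 67.5Gb figure above
```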
Evening update:
122.8014 TiB
415 hours
Avg speed 88.93 MiB/s.
AD gone from 32 to 29.
P/E 2141.
MD5 OK.
Attachment 117838
166.36TB Host writes
MWI 9
Reallocated sector count 6
37.14MiB/s avg speed (8.5 hours), MD5 OK.
It was just a question of time before the others overtook this old horse; it won't be long before B.A.T is ahead as well :)
Well, it will still take some time. From tomorrow evening I need to stop the test for 3 days. We are going on a trip, the rig is coming with me, and it won't be operational again until Friday afternoon.
The evolution of reallocated sectors is interesting. Reallocation events during the endurance tests have been reported only on the Intel and SandForce drives. Leaving the latter aside, I would say that, so far, Intel either has worse flash than the others or is more honest.
The rabbit flashes by the turtle and eventually succumbs to cardiac arrest in the marathon; the turtle resumes the lead and wins by a heartbeat!
The Samsung has more than proven itself for normal use! :up:
Updated charts :)
Host Writes So Far
Attachment 117847
Attachment 117848
(bars with a border = testing stopped/completed)
Raw data graphs
Writes vs. Wear:
Attachment 117849
MWI Exhaustion:
Attachment 117850
Writes vs. NAND Cycles:
Attachment 117851
Attachment 117852
Normalized data graphs
The SSDs are not all the same size; these charts normalize for total NAND capacity.
Writes vs. Wear:
Attachment 117853
MWI Exhaustion:
Attachment 117854
Write-days data graphs
Not all SSDs write at the same speed; these charts factor out write speeds and look at endurance as a function of time.
Writes vs. Wear:
Attachment 117855
MWI Exhaustion:
Attachment 117856
:) We'll just have to wait and see.
Based on the speed of that thing, I can't see it doing any background GC or static wear-leveling.
It just looks a bit out of control, but apparently it's not :)
Hard to tell with the Samsung; there should have been a few more SMART attributes. I've got a hunch that it can still go on for quite some time.
167.71TB Host writes
MWI 8
Reallocated sectors : 6
34.98MiB/s on avg (~20 hours), MD5 OK.
Last update for a couple of days; I'll be back to testing late Friday or early Saturday.
128.5638 TiB
434 hours
Avg speed 88.84 MiB/s.
AD gone from 29 to 26.
P/E 2242.
MD5 OK.
Attachment 117897
Attachment 117898
Any interest in testing a modified Crucial M225 64GB?
I have one that had about 3 months of use before I pulled it from my home system (it was paired with another one in RAID0). It had a couple hundred GB of use on it, I believe (definitely under 1TB, and at 98-99% life) when pulled. I then flashed it to a Vertex Turbo. Why? Because I wanted to know if it could be done, and why not? The flash was destructive and reset everything; it also no longer says Crucial M225.
It's been sitting around for about 6-7 weeks since the OCZ flash. I'm not as savvy as you guys, so I'm not sure it has enough SMART data. Plus, it might burn up too soon, since the controller and NAND are running at higher than rated speed. But I'd be willing to toss it into my work PC as a spare drive and let it run. The problem I foresee is that I'm running XP, so no TRIM; only GC to rely on, or the Wiper tool.
Here is some current info/benches...
http://img607.imageshack.us/img607/346/cdi071911a.jpg
http://img19.imageshack.us/img19/721...zvertextur.png
http://img833.imageshack.us/img833/1...5oczvertex.png
http://img696.imageshack.us/img696/3928/unled6uf.jpg
Don't forget that even though it says OCZ Vertex Turbo, it is a Crucial M225. Let me know what you guys think.
EDIT: The controller is an Indilinx Barefoot IDX110M00-LC, the cache is Elpida 64MB SDRAM S51321CBH-6DTT-F, and the NAND is Samsung K9LBG08U0M-PCB0.