
Thread: SSD Write Endurance 25nm Vs 34nm

  1. #476
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
Quote Originally Posted by gojirasan View Post
Even at 19nm an SLC device will have a long lifetime. So businesses would be expected to buy those. ... except for SLC sales to businesses. ... industry would like to subdivide the market more into long life SLC and shorter lived MLC drives.

All of this is wrong, my friend. Next gen Intel Enterprise drives are MLC. ALL of them. I have spoken with several industry 'insiders' about this, and most aren't pleased, to say the very least. However, Intel is confident they can deliver, and they will.

Quote Originally Posted by gojirasan View Post
As far as the Max IOPS goes, while it is difficult to be sure without the sort of testing going on here, I would be very surprised if it did not have better write endurance than the 25nm version.
IF it had higher endurance, it would be plastered over the front of every box, on the front of every single spec sheet and every single product page, and it isn't. OCZ would market it; I would bet a year's salary on it.
The reason they aren't marketing it that way? Because it will not have higher endurance.
MAX IOPS is for people who think they know what they are looking at, but don't.
Its 4K speed increases aren't even realized at the low queue depths where the market segment it is intended for would see some form of gains. How many normal people out there will notice a 4K increase at QD 64? How many will ever even hit that playing games and browsing? None.
It's fine and all for e-peen, but the 25nm version benches just as fast in real-world apps from what I have seen. As a matter of fact, outside of synthetic benchmarks they are exactly the same.

The difference in Vantage HDD score from a regular V3 to a MAX IOPS V3? 15 points, out of 70639. What percentage of gain is that again? (15/70639, roughly 0.02%.)

EDIT: MLC 25nm Intel enterprise links:
http://www.xbitlabs.com/news/storage...te_Drives.html
http://www.tomshardware.com/news/ssd...ive,10684.html

It is also mentioned in presentations at this year's IDF. A quick Google search should provide more than enough verification.
    Last edited by Computurd; 06-21-2011 at 06:18 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  2. #477
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Computurd View Post

    All
    of this is wrong my friend. Next gen Intel Enterprise drives are MLC. ALL of them. I have spoken with several industry 'insiders' in regards to this, and most arent pleased, to say the very least. However, Intel is confident they can deliver. and they will.
    Their flagship, the 720, is going to be SLC. Only SLC.

  3. #478
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
The 720 is a PCIe device, not an SSD. Their enterprise SSDs will be eMLC. The 710 is the enterprise drive, and it will be eMLC.

The reshaped roadmap seen by X-bit labs postpones introduction of code-named Lyndonville solid-state drives for enterprise markets to Q1 2011 from the last quarter of this year. However, thanks to such delay, the new drives will utilize eMLC NAND made using 25nm process technology, which has a considerably higher number of write cycles than traditional MLC, even though single-level cell (SLC) NAND still boasts an even higher number of write cycles.
Even though specifications of 25nm eMLC flash are not presently available, 34nm eMLC NAND from Micron, Intel's partner in IM Flash, achieved 30 thousand write cycles – a 6x increase in endurance when compared to standard MLC NAND. In addition, last year Micron also introduced a 34nm SLC Enterprise (eSLC) NAND device that achieves 300 thousand write cycles – a 3x increase in endurance when compared to standard SLC NAND.
They will have the eSLC on the PCIe devices. Note the use of "device" instead of "drive".



The Intel 700 Series is meant to replace the X25-E lineup, Intel's enterprise series, which hasn't been updated since late 2008, so it's long overdue. However, neither of these is an exact successor. The 710 Series is closer with its 2.5″ form factor and SATA 3Gb/s. The 710 Series is actually pretty close to the 320 Series in terms of specs: sustained write is slightly higher but random performance is a bit lower. The biggest difference between the 320 and 710 series is the NAND type. The 320 Series uses regular MLC that you can find inside any mainstream SSD; the 710 Series is Intel's first enterprise-level SSD to use MLC NAND, but not just any kind of MLC—it will use MLC-HET NAND. MLC-HET offers more write cycles per cell so longevity is increased, which is crucial for enterprises. The only downside is that MLC-HET will only last for 3 months after all write cycles have been used, whereas normal MLC will last for 12 months. However, this shouldn't be an issue due to the increased amount of write cycles. For the record, MLC-HET with 20% over-provisioning (OP) appears to offer roughly 65 times greater endurance than normal MLC.
    https://hohohk.wordpress.com/2011/06...ions-revealed/
    Last edited by Computurd; 06-21-2011 at 08:06 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  4. #479
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
    98.16TB Host writes
    MWI 46

Nothing else has changed, so the reallocated sector count is still at 6.

  5. #480
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    131.5TB. 32%. My reallocated sector count went up to 7 from 5 overnight.

    Quote Originally Posted by Computurd View Post
    the 720 is a PCIe device, not a SSD. their enterprise SSDs will be eMLC. the 710 is the Enterprise drive, and it will be eMLC.
The high-end enterprise offering is eSLC since it is superior flash at a higher cost. The low-end enterprise offering is eMLC. "Drive" vs "device" hardly matters; both are solid state storage devices.
    Last edited by One_Hertz; 06-22-2011 at 05:27 AM.

  6. #481
    Xtreme Enthusiast
    Join Date
    Feb 2010
    Posts
    701
Of course it matters. Obviously comparing a PCIe device to a regular drive isn't apples to apples, and they don't belong in the same category as the hundreds of other "hard drives" in the world, you know, like the ones being tested in this very thread. You're nitpicking.
    slowpoke:
    mm ascension
    gigabyte x58a-ud7
    980x@4.4ghz (29x152) 1.392 vcore 24/7
    corsair dominator gt 6gb 1824mhz 7-7-7-19
    2xEVGA GTX TITAN
    os: Crucial C300 256GB 3R0 on Intel ICH10R
    storage: samsung 2tb f3
    cooling:
    loop1: mcp350>pa120.4>ek supreme hf
    loop2: mcp355>2xpa120.3>>ek nb/sb
    22x scythe s-flex "F"

  7. #482
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
Assuming continued linear wear, it looks like the 320 has ~32 days to go before it gets to 0 on the MWI, which should net ~195TB.

The X25-V, on the other hand, has around 44 days to get to 0, which should net ~188TB.

    Then things will get really interesting.
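For anyone who wants to reproduce the projection, here is a minimal sketch of the linear-wear math in Python (the linearity assumption is the same one made above; Anvil's 320 figures from post #479 are used as example input):

Code:
# Project total host writes at MWI = 0, assuming the media wearout
# indicator falls linearly with host writes ("continued linear wear").
def projected_total_writes(host_writes_tb, mwi_now, mwi_start=100.0):
    consumed = mwi_start - mwi_now          # wear-indicator points used so far
    return host_writes_tb * mwi_start / consumed

# Anvil's 320 from post #479: 98.16TB written at MWI 46 (54 points consumed)
print(projected_total_writes(98.16, 46))    # ~181.8TB by this crude proportion;
                                            # the day-rate estimate above says ~195TB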

[Attached chart: Untitled.png]
    Last edited by Ao1; 06-22-2011 at 09:27 AM.

  8. #483
    Xtreme Cruncher
    Join Date
    Nov 2008
    Location
    NE Ohio, USA
    Posts
    1,608
I can't wait that long... aaahhhh!!!! (pulls at my hair with anticipation)
    24/7 Cruncher #1
    Crosshair VII Hero, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer II 420 AIO, 4x8GB GSKILL 3600MHz C15, ASUS TUF 3090 OC
    Samsung 980 1TB NVMe, Samsung 870 QVO 1TB, 2x10TB WD Red RAID1, Win 10 Pro, Enthoo Luxe TG, EVGA SuperNOVA 1200W P2

    24/7 Cruncher #2
    ASRock X470 Taichi, Ryzen 3900X, 4.0 GHz @ 1.225v, Arctic Liquid Freezer 280 AIO, 2x16GB GSKILL NEO 3600MHz C16, EVGA 3080ti FTW3 Ultra
    Samsung 970 EVO 250GB NVMe, Samsung 870 EVO 500GBWin 10 Ent, Enthoo Pro, Seasonic FOCUS Plus 850W

    24/7 Cruncher #3
    GA-P67A-UD4-B3 BIOS F8 mod, 2600k (L051B138) @ 4.5 GHz, 1.260v full load, Arctic Liquid 120, (Boots Win @ 5.6 GHz per Massman binning)
    Samsung Green 4x4GB @2133 C10, EVGA 2080ti FTW3 Hybrid, Samsung 870 EVO 500GB, 2x1TB WD Red RAID1, Win10 Ent, Rosewill Rise, EVGA SuperNOVA 1300W G2

    24/7 Cruncher #4 ... Crucial M225 64GB SSD Donated to Endurance Testing (Died at 968 TB of writes...no that is not a typo!)
    GA-EP45T-UD3LR BIOS F10 modded, Q6600 G0 VID 1.212 (L731B536), 3.6 GHz 9x400 @ 1.312v full load, Zerotherm Zen FZ120
    OCZ 2x2GB DDR3-1600MHz C7, Gigabyte 7950 @1200/1250, Crucial MX100 128GB, 2x1TB WD Red RAID1, Win10 Ent, Centurion 590, XFX PRO650W

    Music System
    SB Server->SB Touch w/Android Tablet as a remote->Denon AVR-X3300W->JBL Studio Series Floorstanding Speakers, JBL LS Center, 2x SVS SB-2000 Subs


  9. #484
    Xtreme Member
    Join Date
    Aug 2008
    Location
    SF bay area, CA
    Posts
    262
    I'm not certain of the point of this thread's experiment, but I'm glad it has brought some good technical discussion to the board.

    Quote Originally Posted by gojirasan View Post
    ...This is of course similar to the problem that hard drives are having at the moment with their increasing areal densities. As areal density goes up, signal to noise ratio goes down and 512 bytes per sector is no longer enough to contain all the ECC bits they need to correct the increasingly frequent errors.
Well-written technical statements here; I would only nitpick the last one, correcting it to:
"...512 bytes per sector is no longer efficient to contain all the ECC bits they need to correct the increasingly frequent errors."
The analogous continual increases in page size in NAND flash dies are due to the same reason.
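A hedged back-of-envelope shows why the bigger blocks win. A t-error-correcting BCH-style code needs roughly log2(n)*t parity bits for an n-bit block (an approximation for illustration, not any particular drive's actual code):

Code:
import math

def bch_parity_bits(block_bits, t):
    # ~ceil(log2(block_bits)) parity bits per correctable bit error
    return math.ceil(math.log2(block_bits)) * t

# Eight independent 512B sectors, each correcting up to 5 bit errors:
legacy = 8 * bch_parity_bits(512 * 8, t=5)      # 8 * 12*5 = 480 parity bits
# One 4KB sector correcting up to 40 bit errors anywhere in the block:
advanced = bch_parity_bits(4096 * 8, t=40)      # 15*40 = 600 parity bits
print(legacy, advanced)

Comparable overhead, but the 4KB sector tolerates clustering: all 40 flips can land in what used to be a single 512B chunk and still get corrected, which is exactly what you want as the signal to noise ratio drops.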

    Quote Originally Posted by johnw View Post
    ...As for "order of magnitude", no, not if you mean a factor of 10, as that phrase usually means. When the 3Xnm flash first came out, 5000 erase cycles was not uncommon. With the 25nm flash from IMFT, I have heard both 3000 and 5000 erase cycles. So unless you are claiming that the physics dictate that the erase cycles from 34nm to 25nm must go from 5000 to 500, it is not an order of magnitude. Besides, the 3Xnm flash currently has numbers ranging from 3000 to more than 10,000. You know why? Process improvements and binning. The SSDs usually get the best chips from the wafer. The lower quality chips go to less demanding applications, like USB sticks and consumer electronics like cell phones or media players.
Process improvements and binning are about right. In terms of 25nm MLC:

Temperature rating:
Industrial temp parts: highest quality/price (an order of magnitude better BER than commercial)

Endurance:
eMLC: highest endurance/price, ~10k cycles
cMLC: higher endurance/price, ~5k cycles
MLC: the normal bulk of the retail market, ~3k cycles
ES_MLC: early sample and gray market stuff, lowest price (unrated, but ~1.5k cycles)

    So yeah.. beware of unscrupulous SSD manufacturers that sometimes release early/new drives with crap flash inside.
    "Red Dwarf", SFF gaming PC
    Winner of the ASUS Xtreme Design Competition
    Sponsors...ASUS, Swiftech, Intel, Samsung, G.Skill, Antec, Razer
    Hardware..[Maximus III GENE, Core i7-860 @ 4.1Ghz, 4GB DDR3-2200, HD5870, 256GB SSD]
    Water.......[Apogee XT CPU, MCW60-R2 GPU, 2x 240mm radiators, MCP350 pump]

  10. #485
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    Quote Originally Posted by gojirasan View Post
    Well I don't know if I agree with "more than makes up for it", but I agree that those things certainly can reduce the impact of a shrink if they are present. In any given case that can be a big if. Still, I don't see this as an excuse for producing an inferior product. Inferior to the one they could have made if they weren't so obsessed with saving a few dollars via a process shrink. Because, for the purpose of SSDs at least, that is all it is doing. Saving on manufacturing costs.




    Well "short" is relative. The ideal scenario for the industry as a whole is to have the products fail as soon after end of warranty as possible. You want repeat customers. Of course if their competitors are offering drives which last much longer and the public is aware of that fact they will have to increase the lifespan to compete. MLC drives are intended to be consumer devices. They are supposed to be for the average Joe who might take 10 years to write 80 TB.

    Even at 19nm an SLC device will have a long lifetime. So businesses would be expected to buy those. Keep in mind that at present the manufacturers aren't competing on write endurance at all, except for SLC sales to businesses. It just isn't a marketing point. Even when it could be as in the Max IOPS drive. My point isn't that these companies would intentionally design an SSD to fail post-warranty, but they certainly aren't going to go out of their way to stop it. When we do start seeing 2 bit and possibly even 3 bit 19nm SSDs in 2012 I think write endurance could be reduced to a point where, in 1-3 years, techies at least may start to notice something is very wrong. It wouldn't surprise me at all if the industry would like to subdivide the market more into long life SLC and shorter lived MLC drives.




    I realize that p/e is just one factor in write endurance. It's just the only factor where the manufacturers seem to be moving backwards, not forwards. Even if they can manage to increase overall write endurance despite a process shrink, it still bothers me, because they are not producing as good a product as they could if they used more robust memory with larger floating gates. Whatever happened to "bigger is better"? I want my floating gate transistors to be super-sized! Also I didn't intend to say that a single process shrink has half the write capacity. I meant that *if* it does it had better be half the cost. It was an example. I could have used 30% as an example just as easily.




    Highly educated wild-assed guesses are still wild-assed guesses, but I am also betting that they have been conservative with their p/e c. estimates in the past. That is what gives IMFT the overhead for their 3000 p/e c. 20nm flash and maybe their 5000 p/e c. 25nm flash. Although we will soon find out about that.

    As far as the Max IOPS goes, while it is difficult to be sure without the sort of testing going on here, I would be very surprised if it did not have better write endurance than the 25nm version. It is difficult to imagine how it could not. The larger floating gate can hold more electrons allowing for more fluctuations for a given voltage variation and the oxide layer can be larger too. It is an unusual case because it is a process size difference in the same controller generation so the write endurance and EDC/ECC etc must be assumed to be equal. And how could it be a marketing gimmick? I haven't noticed OCZ mention the theoretical write endurance difference at all. Although that is probably only because they would rather not even open that can of worms.

Incidentally, it has occurred to me that the process shrinks are not all bad in theory if you take a long enough view. If they focus on making smaller sized chips, the OEMs could fit more chips on a given PCB, resulting in greater interleave and higher speeds. Like going from certain 120 GB drives to 240 GB ones. It also means larger capacity drives. Although unless the price per GB drops dramatically they won't exactly be affordable. Also, even if dropping a process size does halve the write endurance, obviously doubling the capacity doubles the write endurance. But I don't think the limitations of floating gate NAND will scale enough for that. I think they will have to transition to something like 3D charge trap flash or even one of the emerging exotic non-volatile memory technologies.
Finally, someone who realizes that this is a business. They are not doing this for "the lulz" of it. Face it: 25nm is an inferior product compared to 34nm. The only thing that can hope to compensate is some controller magic, which could just as easily be implemented on 90nm NAND if they wanted, BUT that is not viable from a business sense!

    Quote Originally Posted by zads View Post
    So yeah.. beware of unscrupulous SSD manufacturers that sometimes release early/new drives with crap flash inside.
Hehe, perhaps you meant OCZ??? How come Corsair recalled their SF2000 drives in the interest of customer satisfaction, but OCZ keeps using its customers as guinea pigs all the time? Something to think about. I only stick to buying SSDs from NAND manufacturers like Intel, Micron/Crucial or Samsung. Forget Corsair, OCZ, Kingston and other such dumb rebranders which are trying to scam you even more.
    Last edited by bulanula; 06-22-2011 at 11:56 AM.

  11. #486
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
People said the same things about the switch from 50nm to 34nm. They probably said the same thing about going down to 50nm.

Quality at any NAND density is more dependent on manufacturing process maturity, but regardless, NAND die size is irrelevant. You don't buy the NAND, you buy an SSD, and clearly the 320 is a better product than the X25-V, so why should anyone care that it happens to use 25nm?

Increasing NAND density is (I suspect) driven more by the market most NAND ends up in: mobile apps that need to cram as much storage space as possible into the smallest form factor possible.

  12. #487
    Xtreme X.I.P.
    Join Date
    Apr 2008
    Location
    Norway
    Posts
    2,838
99.53TB Host writes
    MWI 45

    --

I'm sure a lot can be said about the shrinking of NAND. I for one applaud it as long as I can benefit from it; my 600GB Intel 320 is one of those benefits.
We have already put to shame some of the "myths" about the "fragile" little box of NAND, so whether it's 34nm or 25nm doesn't seem to matter; they both work as advertised, and then some.

  13. #488
    Xtreme Addict
    Join Date
    Feb 2006
    Location
    Potosi, Missouri
    Posts
    2,296
    Quote Originally Posted by bulanula View Post
    Hehe perhaps you meant OCZ ??? How come Corsair recalled their SF2000 drives in interest of customer satisfaction but OCZ keeps using its guinea pig customers all the time ?
    The reason why has been all over the net for a couple of weeks now. Drives were recalled because of an issue with the reference PCB used. The OCZ drives do not use that PCB.

  14. #489
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
Quote Originally Posted by One_Hertz View Post
The high-end enterprise offering is eSLC since it is superior flash at a higher cost. The low-end enterprise offering is eMLC. "Drive" vs "device" hardly matters; both are solid state storage devices.
lol, you are backpedaling. It is what it is: Enterprise-class MLC on the SSDs, no denying that.
The PCIe devices aren't "high-end" SSDs; they are high-end PCIe devices.

The eMLC is on the enterprise SSD.

So... the eSLC is on the PCIe device, and the enterprise-class SSD has eMLC.

Next gen enterprise SSD ----> eMLC.

lol, to say that drive vs. device doesn't matter is ridiculous. How many PCIe devices can you fit in a server? Now... let's think it through... you can put the same number of RAID cards in there with 128 drives on EACH SLOT. You are talking about server real estate, one of the most valuable things in the world to data centers. They aren't in the same class or segment; don't begin to pretend they are.

Remember that part about how, if there were some significant advance made, they would patent it? LOL, there it is: eMLC, cMLC, MLC-HET. They are there, guys.

SLC for SSDs is dead. That has already been said in many places. The price vs. performance value just isn't there.

Much like Google uses consumer HDDs in all of its servers instead of enterprise SAS drives: at the end of the day, it's cheaper per transaction to run the damn consumer drives.

The same thing applies to SLC drives. That's why no one is in a big hurry to push SLC next gen SSDs; the price vs. advantages just aren't there.
SOOOO instead of Intel selling Google and the likes tons of these Intel 310s, they figured, hell, if you can't beat 'em, join 'em. The endurance is there with the new class of MLC anyway.
So go ahead and make the enterprise drives a good value, and ridiculously awesome with the MLC, and sell the hell out of them.
    Last edited by Computurd; 06-22-2011 at 02:36 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  15. #490
    Xtreme Member
    Join Date
    Apr 2011
    Posts
    211
    Quote Originally Posted by Praz View Post
    The reason why has been all over the net for a couple of weeks now. Drives were recalled because of an issue with the reference PCB used. The OCZ drives do not use that PCB.
Still, I am very wary of going the SF2000 route until all the bugs are sorted out. However, I have to hand it to you guys on one aspect: you and the rest of the staff (like Tony) on the OCZ forums do an excellent job of providing support. You should seriously be given a raise, and the managers making all these other dumb decisions sacked. At least OCZ has got something right in all the controversy they caused!
    Last edited by bulanula; 06-22-2011 at 03:45 PM.

  16. #491
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    Quote Originally Posted by Computurd View Post
lol, you are backpedaling. It is what it is: Enterprise-class MLC on the SSDs, no denying that.
The PCIe devices aren't "high-end" SSDs; they are high-end PCIe devices.
...
Next gen enterprise SSD ----> eMLC.
...
SLC for SSDs is dead. That has already been said in many places. The price vs. performance value just isn't there.
...
    Saying "Next gen Intel Enterprise drives are MLC. ALL of them" is plain wrong. The only real difference between the PCI-E 720 and the 2.5 inch SATA 710 drives is the interface. They are both SSDs using similar technology. The bottom line is that Intel is going to be offering both SLC and eMLC enterprise storage devices. With all said and done, both offer quick storage over the PCI-E slot (whether by default or through a RAID card) and compete against one another.

I do agree that the 2.5-inch eMLC drives are more flexible in what they offer. Price is still up in the air; once you add a good RAID card (9265 + FastPath is $800?) to the 2.5-inch drives, we will see which is more expensive. The SLC 720 has 36 times more endurance at the same capacity, per Intel's spec, than the eMLC 710. In environments where a lot of writing is being done, I bet the SLC 720 will be much cheaper to run in the long run, because it definitely won't be 36 times more expensive.
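That bet reduces to dollars per byte of rated endurance. A quick sketch (drive prices are hypothetical placeholders; only the 36x endurance ratio is per the Intel spec above):

Code:
def dollars_per_pb_written(price_usd, rated_endurance_pb):
    # Cost of each petabyte of rated write endurance
    return price_usd / rated_endurance_pb

# Hypothetical prices; 36x endurance ratio per Intel's spec.
emlc_710 = dollars_per_pb_written(price_usd=1300.0, rated_endurance_pb=1.0)
slc_720  = dollars_per_pb_written(price_usd=9000.0, rated_endurance_pb=36.0)
print(emlc_710, slc_720)    # 1300.0 vs 250.0 -- the SLC drive wins on $/PB
                            # as long as it costs less than 36x the eMLC drive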

  17. #492
    Banned
    Join Date
    Jan 2010
    Location
    Las Vegas
    Posts
    936
    Quote Originally Posted by bulanula View Post
you and the rest of the staff (like tony) on the ocz forums do an excellent job of providing propaganda.
    fyp

  18. #493
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
Quote Originally Posted by One_Hertz View Post
both offer fast storage over the PCIe slot (whether directly or through a RAID card) and compete against one another.
BS, dude, and you know it, and so do those reading this.
Don't try to put these two into a competing space. They aren't; they are two entirely separate types of devices.

Quote Originally Posted by One_Hertz View Post
The SLC 720 has 36 times more endurance at the same capacity, per Intel's spec, than the eMLC 710.
Yes, and 128 times less available capacity per PCIe slot, dodo.


Quote Originally Posted by One_Hertz View Post
In environments where a lot of writing is being done, I bet the SLC 720 will be much cheaper to run in the long run, because it definitely won't be 36 times more expensive.
LOL. But how many more servers, and I mean whole entire units, will it entail setting up, simply for lack of PCIe slots? Get real. It will be massively more expensive as the number of units, processors, racks, heat, power and space mounts while you try to match the capacity of even a single server with multiple RAID cards and UBER SSDs.

Take it a step further, use a 6Gb/s SAS switch, and really pack in the gear:
*16 non-blocking 24 Gb/s SAS wide ports with zoning and extended cable length support
*Eliminates the costs of SAS-based “storage islands” in direct-attached storage (DAS) environments and enables independent growth for servers and storage in traditional SANs
*Allows multiple servers to connect to one or more independent external storage systems, enabling more efficient “scaling out” of both storage and servers in data centers and other large storage installations

and you begin to see the PCIe device lose its attractiveness in this scenario even faster. Wasn't someone speaking of orders of magnitude in this very thread?
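To put rough numbers on the slot argument (illustrative only; the 128-drives-per-card figure is from above, and the capacities are assumed):

Code:
DRIVES_PER_RAID_SLOT = 128      # drives reachable through one HBA + expanders
SSD_TB               = 0.4      # e.g. a 400GB 710-class SATA drive
PCIE_CARD_TB         = 0.4      # e.g. a 400GB 720-class PCIe device

per_hba_slot  = DRIVES_PER_RAID_SLOT * SSD_TB   # 51.2 TB behind one slot
per_pcie_slot = PCIE_CARD_TB                    #  0.4 TB behind one slot
print(per_hba_slot / per_pcie_slot)             # 128x the capacity per slot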

Anyways: different devices, different uses, strengths, weaknesses and FORM FACTORS, for the love of christ. You can't lump them together, not even close.

Quote Originally Posted by Computurd View Post
Next gen Intel Enterprise drives are MLC. ALL of them.
    lol. so true!
    Last edited by Computurd; 06-22-2011 at 10:38 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  19. #494
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
    You don't seem to understand what enterprise entails, so I will just leave you with an Anand quote:

    http://www.anandtech.com/show/4452/i...specifications

    I want to start off by saying that these SSDs are aimed at enterprise use.
    The 720 Series will be Intel's first PCIe SSD. To take full advantage of it, you will need at least a PCIe 2.0 x8 slot since a x4 slot will only provide up to 2GB/s while the 720 Series provides read speeds of up to 2.2GB/s. It will use 34nm SLC NANDs, which is pretty common for high-end enterprise SSDs due to SLC's much better endurance. The 720 Series promises up to 36PB (yes, as in 36000TB) of 8KB writes for the 400GB SSD. That is nearly 1000 times more durable than 25nm MLC and over 10 times more durable than 25nm MLC-HET.
    The 710 Series seems to be the low-end offering and it's basically the same as the 320 Series with improved endurance. The 720 Series, on the other hand, is an SSD for heavy enterprise use with features making it suitable for such use.
But I guess the 720 isn't an enterprise SSD after all, since computurd > all.
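(For scale, a quick back-of-envelope on the quoted figure: 36 PB / 400 GB = 36,000 TB / 0.4 TB = 90,000 full-drive writes, which is roughly in line with the ~100k P/E cycles implied for standard SLC earlier in the thread, where eSLC's 300k cycles was called a 3x improvement.)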
    Last edited by One_Hertz; 06-22-2011 at 06:11 PM.

  20. #495
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
    It will use 34nm SLC NANDs, which is pretty common for high-end enterprise SSDs due to SLC's much better endurance.
True, 34nm SLC is very common for last gen enterprise SSDs, but ya know, the beat marches on... onwards and upwards. How many more 34nm devices do you think they are going to release in the future, seeing as it is all going 25nm MLC?
Someone needs to correct this Kristian Vatto dude who wrote this and teach him the difference between an SSD and a PCIe device.

    The 720 Series promises up to 36PB (yes, as in 36000TB) of 8KB writes for the 400GB SSD. That is nearly 1000 times more durable than 25nm MLC and over 10 times more durable than 25nm MLC-HET.
Hmm... the last gen SLC used in that thar card is only 10x more durable than 25nm MLC-HET. So the 36x number bandied about a bit recently wasn't quite up to snuff.

Seriously, I'm not arguing that MLC is better than SLC. You just happen to notice there is NO 25nm SLC, though, right-o??? SLC is going the way of the dodo.
Only 10x the durability, but how much extra cost?

Anywho, on to better things... the whole point was to illuminate very brightly that SLC is pretty much done. I feel that I have accomplished that, evidenced by the fact that they aren't even making further generations of it... that might be a bit of a tip-off that SLC is on its way out.

Oh, and the way Intel keeps repeating it over and over in IDF presentations.

Not very impressed with ol' Kristian though. He should really know to differentiate between an SSD and a PCIe NAND/SSD device.
    Last edited by Computurd; 06-22-2011 at 06:45 PM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  21. #496
    Xtreme Addict
    Join Date
    Feb 2006
    Location
    Potosi, Missouri
    Posts
    2,296
    Quote Originally Posted by bulanula View Post
Still, I am very wary of going the SF2000 route until all the bugs are sorted out. However, I have to hand it to you guys on one aspect: you and the rest of the staff (like Tony) on the OCZ forums do an excellent job of providing support. You should seriously be given a raise, and the managers making all these other dumb decisions sacked. At least OCZ has got something right in all the controversy they caused!
v2.09, which was released yesterday, will hopefully turn things around. All reports from users so far are very good as far as stability goes.

  22. #497
    Xtreme Member
    Join Date
    Aug 2008
    Location
    SF bay area, CA
    Posts
    262
    Quote Originally Posted by Computurd View Post
lol, you are backpedaling. It is what it is: Enterprise-class MLC on the SSDs, no denying that.
The PCIe devices aren't "high-end" SSDs; they are high-end PCIe devices.
...
Next gen enterprise SSD ----> eMLC.
...
SLC for SSDs is dead. That has already been said in many places. The price vs. performance value just isn't there.

Much like Google uses consumer HDDs in all of its servers instead of enterprise SAS drives: at the end of the day, it's cheaper per transaction to run the damn consumer drives.
    ...
Hmm... well, Computurd, you know a lot about the industry, but be careful about the blanket/absolute statements you're making.
Like many things, the SLC question is not as 'cut and dried' as you think.
They certainly aren't dead; 25nm SLC brings a lot to the game.

I consider "PCIe based flash devices" to still be SSDs, as most of the industry does, simply differentiated as PCIe SSDs, SATA SSDs, SAS SSDs, mini-PCIe SATA SSDs, etc.

Also, Google builds many different server products for its internal usage. Do you really think Google uses consumer HDDs in ALL of its servers and no SSDs? Maybe when they were just starting out...

An inquisitive person might draw a connection linking the statements I just made here...


    Quote Originally Posted by bulanula View Post
...Face it: 25nm is an inferior product compared to 34nm.
...
Hehe, perhaps you meant OCZ??? How come Corsair recalled their SF2000 drives in the interest of customer satisfaction, but OCZ keeps using its customers as guinea pigs all the time? Something to think about. I only stick to buying SSDs from NAND manufacturers like Intel, Micron/Crucial or Samsung. Forget Corsair, OCZ, Kingston and other such dumb rebranders which are trying to scam you even more.
I don't think I would call 25nm 'inferior'...
25nm uses more sophisticated wafer processing steps and is more accurate.
25nm has sufficient ECC to maintain the same UBER as previous process technologies.
25nm has greater density chips, allowing for higher chip/drive capacities.
I can have my pick of 50nm, 34nm, 25nm SSDs... I go for the 25nm.

I won't comment on OCZ, but Corsair, Kingston, etc. still build their own drives based on third-party controllers. They are cheaper than the Samsung/Intel/Micron drives; you just run a slightly higher risk of the drive dying prematurely.
Back up your important data, people :P


    Quote Originally Posted by Computurd View Post
True, 34nm SLC is very common for last gen enterprise SSDs, but ya know, the beat marches on... onwards and upwards. How many more 34nm devices do you think they are going to release in the future, seeing as it is all going 25nm MLC?

Seriously, I'm not arguing that MLC is better than SLC. You just happen to notice there is NO 25nm SLC, though, right-o??? SLC is going the way of the dodo.
Only 10x the durability, but how much extra cost?

Anywho, on to better things... the whole point was to illuminate very brightly that SLC is pretty much done. I feel that I have accomplished that, evidenced by the fact that they aren't even making further generations of it... that might be a bit of a tip-off that SLC is on its way out.

Oh, and the way Intel keeps repeating it over and over in IDF presentations.
Well, if big customers are willing to pay Samsung/Intel/Micron/etc. for SLC devices, then it makes sense for Intel to keep making it... it's the same thing as MLC really.
I think the SLC vs MLC process lag is really just due to the large OEM flash consumers not wanting to re-qualify expensive new parts.
And yes, SLC is still required, or cheaper overall, in some cases.
    Last edited by zads; 06-22-2011 at 08:30 PM.
    "Red Dwarf", SFF gaming PC
    Winner of the ASUS Xtreme Design Competition
    Sponsors...ASUS, Swiftech, Intel, Samsung, G.Skill, Antec, Razer
    Hardware..[Maximus III GENE, Core i7-860 @ 4.1Ghz, 4GB DDR3-2200, HD5870, 256GB SSD]
    Water.......[Apogee XT CPU, MCW60-R2 GPU, 2x 240mm radiators, MCP350 pump]

  23. #498
    Xtreme Guru
    Join Date
    Aug 2009
    Location
    Wichita, Ks
    Posts
    3,887
Quote Originally Posted by zads View Post
They certainly aren't dead; 25nm SLC brings a lot to the game.
I feel it is indicative of their intended path that they are now releasing enterprise SSD "drives" with 25nm MLC and no mention of 25nm SLC anywhere.

Quote Originally Posted by zads View Post
I consider "PCIe based flash devices" to still be SSDs, as most of the industry does, simply differentiated as PCIe SSDs, SATA SSDs, SAS SSDs, mini-PCIe SATA SSDs, etc.
I agree, there needs to be differentiation, as I stated above; some of that I was just arguing for fun, playing around...
However, you know as well as I do that these are clear-cut different sectors of the enterprise market, both in price and in the applications they are used in. An enterprise PCIe SSD is not necessarily going to be competing with a SATA/SAS SSD; one use for the PCIe would be super-intensive workloads that demand the lowest latency. However, that is likely going to be premium storage space in a tiered approach. They don't really compete with each other per se.


Quote Originally Posted by zads View Post
Also, Google builds many different server products for its internal usage. Do you really think Google uses consumer HDDs in ALL of its servers and no SSDs? Maybe when they were just starting out...
I wasn't speaking to the use of HDDs over SSDs; perhaps I wasn't clear in my wording. Of course Google uses SSDs in certain farms, or even in tiered caching scenarios.

The scenario I was using as an example is this. Facts first:
*Enterprise-class SAS HDDs cost much more. Cost per transaction is high, but the reliability is UBER (supposedly).
*Consumer-variant SATA HDDs are much cheaper and definitely have a lower reliability curve (supposedly). However, cost per transaction is extremely low. Way low.

SO... what Google has done is forgo the SAS drives, with their prohibitive costs, in favor of consumer-variant SATA drives. The "business model" is to simply accept the slightly higher failure rate of the consumer HDDs and eat that cost. The cost of replacing them is still far, far below the cost of running the SAS drives, just because the cost of SAS is so high.
By using the cheaper drives and accepting the loss of drives as a business expense, their cost per transaction is significantly lower.
Google has published this information freely; it is merely a google away.

The extrapolation of this point is the comparison of SLC vs MLC in the enterprise space. With SLC so cost prohibitive, and MLC gaining rapidly on endurance, it is simply much, much smarter to use MLC in the first place and get it over with. Eat the cost of the reduced endurance, and in the end you come out way ahead on cost per transaction.
Also, in today's data center, effective tiered caching will let you further extend the viability of your MLC drives.
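As a toy version of that cost-per-transaction argument (all numbers are made-up placeholders, not Google's actual figures):

Code:
def yearly_fleet_cost(drive_price, annual_failure_rate, fleet_size, life_years=3):
    capital      = fleet_size * drive_price / life_years   # purchase cost spread out
    replacements = fleet_size * annual_failure_rate * drive_price
    return capital + replacements

# 1000-drive fleet, hypothetical prices and failure rates:
sas  = yearly_fleet_cost(drive_price=400, annual_failure_rate=0.02, fleet_size=1000)
sata = yearly_fleet_cost(drive_price=100, annual_failure_rate=0.05, fleet_size=1000)
print(sas, sata)    # ~141333 vs ~38333 -- even at 2.5x the failure rate, the
                    # consumer drives cost far less per year, so less per transaction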

Quote Originally Posted by zads View Post
They certainly aren't dead; 25nm SLC brings a lot to the game.
But when? And where? No time in the near future.

    EDIT: in a nutshell...
    Lyndonville drives will also be created out of multi-level cell NAND Flash memory chips and will most likely also be meant for the enterprise sector. This is because, while offering a boost in performance and a lower chance of errors, SLC adds a significant extra cost compared to multi-level cell chips (MLC), a price premium that, in most cases, doesn't justify the performance enhancement, even in data centers and servers.
    http://news.softpedia.com/news/Intel...s-139500.shtml
    Last edited by Computurd; 06-23-2011 at 12:06 AM.
    "Lurking" Since 1977


    Jesus Saves, God Backs-Up
    *I come to the news section to ban people, not read complaints.*-[XC]Gomeler
    Don't believe Squish, his hardware does control him!

  24. #499
    Xtreme Mentor
    Join Date
    Feb 2009
    Posts
    2,597
    Quote Originally Posted by bulanula View Post
    Well I think everybody knows how Intel INTENTIONALLY throttled its controllers from the G1 -> G2 -> G3 so that is why the G3 has supposedly better speeds etc.
SF drives also intentionally reduce write speeds (which also impacts read speeds), but they do it via throttling.

SF drives cannot sustain their performance levels without burning up the NAND, despite the compression advantage; hence the drives are lifetime-throttled.

Intel drives provide consistent write speeds throughout the life of the product, regardless of transfer size. SF drives can't do this.

Yes, you might get short-term performance boosts before throttling kicks in, but once it has kicked in you are stuffed.
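For what it's worth, lifetime throttling amounts to something like the sketch below: keep the drive's long-run average write rate under a budget derived from rated endurance and warranty length, so short bursts run at full speed while sustained writes get clamped. Purely illustrative; the numbers and the exact policy are assumptions, not SandForce's actual algorithm.

Code:
RATED_ENDURANCE_TB = 72.0               # hypothetical NAND write budget
WARRANTY_DAYS      = 3 * 365            # hypothetical 3-year warranty
TB_PER_DAY_BUDGET  = RATED_ENDURANCE_TB / WARRANTY_DAYS

def allowed_write_speed(total_tb_written, days_in_service,
                        full_mbps=250.0, throttled_mbps=15.0):
    # Full speed while lifetime writes stay under budget for the drive's age;
    # otherwise clamp until the running average falls back under the line.
    if total_tb_written <= TB_PER_DAY_BUDGET * days_in_service:
        return full_mbps
    return throttled_mbps

print(allowed_write_speed(1.0, 30))     # light use after a month -> full speed
print(allowed_write_speed(5.0, 30))     # 5TB in 30 days >> budget -> throttled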

  25. #500
    SLC
    Join Date
    Oct 2004
    Location
    Ottawa, Canada
    Posts
    2,795
135TB. 30%.

