
Thread: Thought experiment: HDD with flash read-cache?

  1. #51
    Banned
    Join Date
    May 2009
    Posts
    676
The number of dies may be 2 per package (meaning 4GB dies), or one die per package at 8GB.
Should be 2 dies per package.


    (taken from the Micron PDF).

  2. #52
    Registered User
    Join Date
    Apr 2010
    Posts
    23
The 160GB and 80GB look the same (only one side of the PCB is filled up with flash). The 80GB uses 4GB dual-die packages, the 160GB uses quad-die packages:



That was when they still planned the refresh with 34nm.

    ( http://www.anandtech.com/show/2808/2 )

Edit: Of course I was and am always talking about the G2! The G1 only uses 4GB packages as far as I know.
    Last edited by Eggcake; 05-25-2010 at 11:45 AM.

  3. #53
    Banned
    Join Date
    May 2009
    Posts
    676
Haha, so the 160GB X25-M uses 4 links per die, which is what makes it faster than the dual-link 80GB G2.

The X25-E goes with 10 quad-die packages of 2GB dies (8GB each) and another dual-die one at 4GB, for a total of 84GB (20GB spare area),
and that is what makes it so expensive!

    great!
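The capacity arithmetic above works out in a couple of lines. A minimal sketch; the package layout is this post's assumption, not a confirmed Intel spec, and only the 64GB advertised capacity is an official figure:

```python
# Capacity math for the X25-E layout assumed above (not confirmed specs).
quad_packages = 10    # packages with 4 dies each
dies_per_quad = 4
die_gb = 2            # 2GB SLC dies
dual_package_gb = 4   # one extra dual-die package

raw_gb = quad_packages * dies_per_quad * die_gb + dual_package_gb
advertised_gb = 64    # the 64GB X25-E
print(f"raw: {raw_gb} GB, spare area: {raw_gb - advertised_gb} GB")
```

That reproduces the 84GB raw / 20GB spare split claimed above.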

    E:
    G1 vs G2 160GB:
    More memory means that the drive can track more data and do a better job of keeping itself defragmented and well organized. We see this reflected in the "used" 4KB random write performance, which is around 50% higher than the previous drive.

    Intel is now using 16GB flash packages instead of 8GB packages from the original drive. Once 34nm production really ramps up, Intel could outfit the back of the PCB with 10 more chips and deliver a 320GB drive. I wouldn't expect that anytime soon though.
The SSD Relapse, AnandTech!
    Last edited by onex; 05-25-2010 at 01:11 PM.

  4. #54
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
    Quote Originally Posted by onex View Post
Haha, so the 160GB X25-M uses 4 links per die, which is what makes it faster than the dual-link 80GB G2.
No, the 160GB uses 4 dies per link. The exact inverse.
The X25-M has 10 flash channels/busses/links; when there are more dies than channels, some dies will share a channel in a serial interleaved configuration.

    Check out page 21 in this document for some performance illustration: http://onfi.org/wp-content/uploads/2...bottleneck.pdf

    This is kinda off topic, but here goes a rant:
If the X25-M G3 uses 10 channels of ONFI 2.1 25nm NAND on ONFI 2.1 busses, the bandwidth and IOPS would be (given the numbers in the previous document) roughly:
Single die packages, 10 packages: 880MB/s read (internal), 80MB/s write, up to 75,000 4KB random read IOPS, up to 20,000 4KB random write IOPS.
Dual die packages, 10 packages: 1760MB/s read (internal), 160MB/s write, <75K IOPS read, <40K IOPS write.
Quad die packages, 10 packages: 2000MB/s read, 320MB/s write, <75K IOPS read, <80K IOPS write.
Quad die packages, 20 packages (double stacked or on both sides of the PCB): 2000MB/s read, 640MB/s write, <75K IOPS read, <160K IOPS write.
I'd hope Intel makes a version of the G3 SSD with a PCIe 2.0 x4 interface, 10 channels, dual die packages, 10 packages. Given 8GB dies at 25nm, that would give 160GB. You'd get ~1750MB/s read, 160MB/s write, <75K IOPS read, <40K IOPS write. Pretty sweet if bootable, and probably not too costly to make either.
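The four configurations above all fall out of one simple interleaving model: per-channel read bandwidth scales with dies per channel until it hits the channel bus limit, while program (write) bandwidth just scales with total die count. A sketch under the post's assumed per-die figures (~88MB/s read, ~8MB/s program per die, ~200MB/s ONFI 2.1 bus cap; none of these are official Intel numbers):

```python
# Rough interleaving model for a 10-channel controller.
# Per-die and bus figures are the assumptions behind the table above,
# not datasheet values.
CHANNELS = 10
DIE_READ = 88    # MB/s, assumed per-die streaming read
DIE_WRITE = 8    # MB/s, assumed per-die program throughput
BUS_CAP = 200    # MB/s, assumed ONFI 2.1 synchronous channel limit

def throughput(dies_per_channel):
    # Reads saturate the shared channel bus; writes are die-limited.
    read = CHANNELS * min(dies_per_channel * DIE_READ, BUS_CAP)
    write = CHANNELS * dies_per_channel * DIE_WRITE
    return read, write

for dies, label in [(1, "single-die x10"), (2, "dual-die x10"),
                    (4, "quad-die x10"), (8, "quad-die x20")]:
    r, w = throughput(dies)
    print(f"{label}: {r} MB/s read, {w} MB/s write")
```

Running it reproduces the 880/1760/2000 MB/s read and 80/160/320/640 MB/s write steps listed above, including why read flatlines at 2000 MB/s once the busses saturate.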

  5. #55
    Banned
    Join Date
    May 2009
    Posts
    676
No, the 160GB uses 4 dies per link. The exact inverse.
The X25-M has 10 flash channels/busses/links; when there are more dies than channels, some dies will share a channel in a serial interleaved configuration.

    Check out page 21 in this document for some performance illustration: http://onfi.org/wp-content/uploads/2...bottleneck.pdf
The page linked is for synchronous signaling,

so the X25-M 80GB G2 would have a capability of 176MB/s per dual die on a single channel, with dual-plane dies.
This document is from July 2008, so it could reflect the SSD manufacturing process of that time; the G2s should fit into these schemes.

Taking 10 channels, the SSD should reach, optimally, 176*10 = 1760MB/s.
Write throughput should scale: 16MB/s per dual die on a dual-plane die,
so 10 channels equal 160MB/s write.

Read..
Wait,

http://hardforum.com/showthread.php?p=1034431478
Whether it^^ is related to the signaling method,
the X25-M seems to actually operate asynchronously.
The asynchronous performance, per p. 11, is
400MB/s total read and 140MB/s write for 10 channels, with a dual die on a single channel.

And the X25-M uses ONFI 1.0
("A Quick Introduction to the ONFI 2 Synchronous
Interface",
p. 14).

That sounds more reasonable.
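The sanity check being made here can be written out explicitly: compare the synchronous estimate, the asynchronous estimate, and what the drive is actually rated for. A quick sketch; the rated figures are the commonly quoted X25-M G2 numbers (~250MB/s read, 70-100MB/s write depending on capacity), which I'm citing from memory rather than a datasheet:

```python
# Which signaling estimate better matches the X25-M G2's real-world specs?
sync_read, sync_write = 176 * 10, 16 * 10   # MB/s: ONFI 2 sync estimate, 10 channels
async_read, async_write = 400, 140          # MB/s: async totals from p. 11
rated_read, rated_write = 250, 100          # MB/s: commonly quoted G2 ratings

estimates = {"sync": (sync_read, sync_write),
             "async": (async_read, async_write)}
for name, (r, w) in estimates.items():
    print(f"{name} estimate: {r} MB/s read, {w} MB/s write "
          f"(drive rated ~{rated_read}/{rated_write})")
```

The async estimate (400/140) sits much closer to the drive's SATA-limited ratings than the sync one (1760/160), which is the point of the paragraph above.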

P.S. - the NAND is working in parallel.
Ba ba ba.. "featuring the latest-generation native SATA interface with an advanced architecture employing 10 parallel NAND flash channels..."
    http://www.intel.com/design/flash/na...ream/index.htm

    This is kinda off topic,
Will you stop stressing yourself over this off-topic thing?
It's perfectly fine to share your thoughts even if they are unrelated to the thread.

    Single die packages
Should be dual (but let's see)...

    Single die packages, 10 packages: 880MB/s read (internal), 80MB/s write, up to 75.000 4KB random read IOPS, up to 20.000 4KB random write IOPS.
    Dual die packages, 10 packages: 1760MB/s read (internal), 160MB/s write, <75K IOPS read, <40K IOPS write.
lol, exactly what goes through one's mind when following that awesome document:
SATA 3 interface, 600MB/s available throughput,
320GB drives, ONFI 2.1, synchronous interface (double bandwidth), la la la la,
they can bring a bomb to the market :YAY:.

  6. #56
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
If you want proof of the performance being available, just check out what Fusion-io did back in 2007-2008 with their ioDrives.
In 2008, you could buy an 80GB SLC SSD with a PCIe 1.1 x4 interface able to do 750MB/s read, 500MB/s write, 100,000 IOPS read, 80,000 IOPS write (the 100K IOPS write was for the 160GB version).
If I don't remember wrongly, the ioDrive uses 12 channels (or was it 24?) in a low-latency cluster setup. Simple math says 750/12 = 62.5 (MB/s per channel), and 750/24 = 31.25 (MB/s per channel). If it used async NAND, it would likely be 24 channels; if it was early sync, it could be 12 channels.
EDIT: the ioXtreme does 700MB/s(+) read from 80GB of MLC.

Anyway, I have written a long discussion of this general subject in Norwegian on another forum (about 3000 words, I think), and I will be posting a dedicated thread with an English version one of the coming days.
I'm contemplating making a "GullLars' SSD rant/discussion thread" for technical discussions and "what if" or "how awesome would *blablabla* be" posts, as a sort of consolidating thread. We've had a similar thread on the Norwegian forum I mentioned for 1.5 years now, and it has over 7500 posts and has been read 290K times. About 1/3 could be summarized as FAQ Q&As, and about 1/4 is help deciding which SSD to get, or whether to buy an SSD at all. If you go by word count, about 50-60% is technical discussions and rants about tech sites' reviews. I'm responsible for about 20-25% of everything written in the thread XD
    Last edited by GullLars; 05-25-2010 at 03:19 PM.

  7. #57
    Banned
    Join Date
    May 2009
    Posts
    676
    If it used async NAND, it would likely be 24 channels, if it was early sync, it could be 12 channels.
You don't get it.
ONFI 1 uses asynchronous NAND; ONFI 2 uses synchronous NAND.
Asynchronous transmission is slower than synchronous transmission, as the signal has to go through a single line while being modulated:
the modulation process means each byte, word or Dword has to be separated by a signal telling the receiver where the data packet starts and ends,
the same as it goes for internet protocols.
Synchronous transmission is said to be expensive, as it is harder to implement;
it involves two lines, where the clock signal works on one and the data on the other.
An asynchronous transmission is not really asynchronous, yet it is called that, maybe to separate its naming from synchronous signaling.

So,
synchronous signaling IS faster:
every byte flowing through it is data, as opposed to asynchronous signaling.
Capisce?
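The framing-overhead argument above can be illustrated with a toy calculation. This is UART-style async framing (1 start bit + 8 data bits + 1 stop bit per byte), which is not how NAND interfaces actually frame data, but it shows the principle being argued: framing bits carry no payload, so at the same line rate the clocked interface moves more data:

```python
# Toy framing-overhead comparison (illustrative, not an ONFI timing model).
line_rate = 1_000_000            # bits/s on the wire, arbitrary
async_bits_per_byte = 1 + 8 + 1  # start bit + 8 data bits + stop bit
sync_bits_per_byte = 8           # clock on a separate line: every bit is data

async_bytes = line_rate // async_bits_per_byte
sync_bytes = line_rate // sync_bits_per_byte
print(f"async: {async_bytes} B/s, sync: {sync_bytes} B/s "
      f"({sync_bytes / async_bytes:.2f}x)")
```

(The bigger win for ONFI 2 in practice is the move to a source-synchronous DDR bus, but the payload-vs-framing point stands on its own.)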

I just think you skip over some important stuff without paying attention to it;
it happens.
First, you have to fully figure out what is happening inside a controller or an IC circuit, and not assume it is easy and you already understand it;
there are many pieces of data which come in between.
If you don't understand something, google it, or say
"I don't understand",
and then offer ideas.
It's the same thing you were told before, actually.
    Last edited by onex; 05-26-2010 at 12:55 PM.

  8. #58
    Xtreme Enthusiast
    Join Date
    Dec 2009
    Location
    Norway
    Posts
    513
I have read several white papers on NAND flash architecture, the ONFI specification (PHY included), controller design, signaling, ECC, etc. I feel I understand it well enough to not make huge blunders, but I'm by no means an IT professional or hardware engineer. Since most of the details of the low-level stuff (hardware-wise) are not directly relevant to the topic here without increasing the complexity by a lot, I felt it was best to keep from going into it. This is the sort of thing I could discuss in the SSD discussion/rant thread I made, though

  9. #59
    Xtreme Cruncher
    Join Date
    Oct 2008
    Location
    Chicago, IL
    Posts
    840
    Epic fail:

    http://hardware.slashdot.org/article.../06/02/0156217

    Their newer, more expensive hybrid performs more slowly than their older (and cheaper) generation.

    I'm shocked I tell you, shocked!

I hope for their current customers' sake that this can be fixed with a firmware update.

