Page 14 of 21 (Results 326 to 350 of 520)

Thread: Forum Vs Napalm - Fastest real world storage solution

  1. #326
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    meow?

  2. #327
    Registered User
    Join Date
    Feb 2006
    Location
    Germany (near Ramstein)
    Posts
    421
    *curr*

    Last edited by F.E.A.R.; 09-15-2009 at 03:19 PM.

  3. #328
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    haha
    Quote Originally Posted by LexDiamonds View Post
    Anti-Virus software is for n00bs.

  4. #329
    2.4C killer
    Join Date
    May 2003
    Location
    San Diego, CA
    Posts
    1,924
I tried your little 40-app test with my Xeon rig; it was something like 7-8s, and with a 4GB RAM drive with ReadyBoost it was only about 3-4s to load the 40 apps

  5. #330
    Registered User
    Join Date
    Feb 2006
    Location
    Germany (near Ramstein)
    Posts
    421
    @ Napalm

2TB SuperTalent RAID card with 1.4/1.2GB/s read/write and an Intel IOP348. Fast enough? http://translate.google.com/translat...ie%2F447831%2F

  6. #331
    Xtreme Member
    Join Date
    Aug 2009
    Location
    Nelson, New Zealand
    Posts
    367
    Quote Originally Posted by NapalmV5 View Post
    why go 8x mlc when 4x slc decimates the world ??
    Given the name of this forum, do you really not know the answer to that question?

    Frankly, I'm much more interested in building the fastest possible system. If 4x SLC decimates the world, and 8x MLC is even faster, then that's what I want.

    Quote Originally Posted by NapalmV5 View Post
    get 8x x25m + 4x x25e and see for yourself
    If I had the time and money to buy and test one of every possible system, I would. Unfortunately, I have to rely on pre-purchase analysis and feedback from current owners and users instead.

    Quote Originally Posted by NapalmV5 View Post
    since 4x is enough to saturate the controller 4x is the max i go on the 1231
    It's exactly that point which makes me think that a PCIe 2.0 controller has the potential of being faster overall, since they have twice the bus bandwidth available.

    Quote Originally Posted by NapalmV5 View Post
    the fewer the ssds the better the controller handles/controls.. the lower the cpu load
    Really? Do you have any tests to back that up, or is it more of a gut feeling? I suppose zero SSDs would be the lowest CPU load, but it definitely wouldn't be the fastest.
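The "twice the bus bandwidth" point above can be put into numbers. A rough back-of-envelope sketch (assuming 8b/10b link encoding on gen 1/gen 2 and ignoring protocol overhead; the x8 link widths are illustrative):

```python
# Rough theoretical PCIe bandwidth per direction.
# Gen 1 runs at 2.5 GT/s per lane, gen 2 at 5.0 GT/s; 8b/10b encoding
# means 10 bits on the wire per usable byte.
GT_PER_SEC = {1: 2.5, 2: 5.0}

def pcie_bw_mb_s(gen, lanes):
    return GT_PER_SEC[gen] * 1e9 / 10 * lanes / 1e6

print(pcie_bw_mb_s(1, 8))  # 2000.0 MB/s for a PCIe 1.x x8 card
print(pcie_bw_mb_s(2, 8))  # 4000.0 MB/s, i.e. double on PCIe 2.0 x8
```

So a gen-1 x8 slot tops out around 2GB/s per direction, which is already above the array speeds being argued about in this thread; the gen-2 headroom only matters once the controller and drives can actually fill it.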

  7. #332
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by F.E.A.R. View Post
    @ Napalm

    2TB SuperTalent RAID card with 1.4/1.2GB/s read/write and an Intel IOP348. Fast enough? http://translate.google.com/translat...ie%2F447831%2F
    id be interested in the 192gb slc but iop348 ?

    thats sour kraut on ice cream

    goddamn sas controllers!

    so how they do 1.4/1.2gb/s r/w ? custom iop348 ?

    whats the interface between the ssd and the iop ?
    Last edited by NapalmV5; 09-17-2009 at 12:54 AM.

  8. #333
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by AceNZ View Post
    Given the name of this forum, do you really not know the answer to that question?

    Frankly, I'm much more interested in building the fastest possible system. If 4x SLC decimates the world, and 8x MLC is even faster, then that's what I want.



    If I had the time and money to buy and test one of every possible system, I would. Unfortunately, I have to rely on pre-purchase analysis and feedback from current owners and users instead.



    It's exactly that point which makes me think that a PCIe 2.0 controller has the potential of being faster overall, since they have twice the bus bandwidth available.



    Really? Do you have any tests to back that up, or is it more of a gut feeling? I suppose zero SSDs would be the lowest CPU load, but it definitely wouldn't be the fastest.
    look you think you know better than me?.. no problem.. but dont be surprised when and why my 1231/4x x25e decimates your 8x mlc/pcie2 sas

  9. #334
    Xtreme Member
    Join Date
    Aug 2009
    Location
    Nelson, New Zealand
    Posts
    367
    Quote Originally Posted by NapalmV5 View Post
    look you think you know better than me?.. no problem.. but dont be surprised when and why my 1231/4x x25e decimates your 8x mlc/pcie2 sas
    I wasn't trying to assert that the 9260 with 8x MLC would be faster than the 1231 with 4x SLC. I've seen the benchmarks in the other thread. In my environment, small block random I/O rates determine performance much more than sequential I/O. The lower latency on the 1231 makes complete sense.

    The original question I was trying to ask is something completely different: how would you build the fastest possible system? First choose the fastest drives and controller. If the 1231 is the best, and if more drives or a PCIe 2.0-based controller won't help, then what's next? Stripe a bunch of them together? Are there configuration tweaks, such as NTFS cluster size, strip/stripe size, disabling cache, or what?

    Or maybe switching to a native PCIe-only controller would be better? That should eliminate the SATA latency, which would help a lot with small block random I/O.
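On the strip/cluster-size question above, one concrete effect is alignment: an I/O smaller than the strip size stays on one drive, but if it straddles a strip boundary it costs two drives' worth of work. A hypothetical sketch (the 64K/4-drive numbers are illustrative, not a recommendation):

```python
# How many RAID-0 member drives a single I/O touches, given its offset
# and size relative to the strip size. Matching the NTFS cluster size
# to the strip size keeps small I/Os from straddling strips.
def drives_touched(offset_kb, size_kb, strip_kb, n_drives):
    first = offset_kb // strip_kb
    last = (offset_kb + size_kb - 1) // strip_kb
    return min(last - first + 1, n_drives)

# A 64K write aligned to a 64K strip lands on one drive...
print(drives_touched(0, 64, 64, 4))   # 1
# ...but the same write misaligned by 4K straddles two strips/drives.
print(drives_touched(4, 64, 64, 4))   # 2
```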

  10. #335
    Registered User
    Join Date
    Feb 2006
    Location
    Germany (near Ramstein)
    Posts
    421
    Quote Originally Posted by NapalmV5 View Post
    id be interested in the 192gb slc but iop348 ?

    thats sour kraut on ice cream

    goddamn sas controllers!

    so how they do 1.4/1.2gb/s r/w ? custom iop348 ?

    whats the interface between the ssd and the iop ?
    I think the 1.4GB/s is with cache.

  11. #336
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    if thats the case.. 1231 does 1.6+gb/s

    sas controller just cant beat 1231/1261 no way no how

  12. #337
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    Quote Originally Posted by AceNZ View Post
    I wasn't trying to assert that the 9260 with 8x MLC would be faster than the 1231 with 4x SLC. I've seen the benchmarks in the other thread. In my environment, small block random I/O rates determine performance much more than sequential I/O. The lower latency on the 1231 makes complete sense.

    The original question I was trying to ask is something completely different: how would you build the fastest possible system? First choose the fastest drives and controller. If the 1231 is the best, and if more drives or a PCIe 2.0-based controller won't help, then what's next? Stripe a bunch of them together? Are there configuration tweaks, such as NTFS cluster size, strip/stripe size, disabling cache, or what?

    Or maybe switching to a native PCIe-only controller would be better? That should eliminate the SATA latency, which would help a lot with small block random I/O.
    whats next is sata3 pcie2 controllers/sata3 drives.. none out and about yet sata6g+sas6g controller = sata3g performance <- you wouldnt want that..

    the other option is sas6g drives + sas6g controller: areca 1800 though 4x/8x sas6g drives cost more than a highend mercedes benz

    so at the moment if you want the fastest possible system without the hassle of raid cards/x # of drives theres pcie iodrive/ramsan

    ramsan: http://www.xtremesystems.org/forums/...d.php?t=230107
    Last edited by NapalmV5; 09-17-2009 at 09:42 AM.

  13. #338
    Xtreme Mentor
    Join Date
    May 2006
    Location
    Croatia
    Posts
    2,542
    I think we can all agree that the card to wait for is a new gen. one but without SAS overhead...
    The question is who will be the first and how much cheaper will it be in regard to 1231...
    Quote Originally Posted by LexDiamonds View Post
    Anti-Virus software is for n00bs.

  14. #339
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    Quote Originally Posted by NapalmV5 View Post
    if thats the case.. 1231 does 1.6+gb/s

    sas controller just cant beat 1231/1261 no way no how
    I give you competition with my Areca 1680ix

    1680ix rules with SAS drives. It scales all the way up to 1250MB/s with 10x 15k.5 SAS drives. And I got over 1GB/s read speed with 5x Vertex.

    I tested the latency. You remember. My latency was not so bad

  15. #340
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    *sigh* i thought we went over this nizzen.. no ?

    you dont got 1gb/s read.. 5x 150 = 750mb/s at best

    if you wanna be like that.. i do over 1gb/s too.. @ h2benchw and if you prefer everest.. 1.6gb/s

    5x vertex? i thought its my 4x slc vs your 6x mlc?

    let me make it easier..

    nizzen vs napalm
    1680 vs 1231
    2gb cache vs 2gb cache
    5x/6x mlc (vertex) vs 4x slc (jmicron)
    942mb/s vs 1070mb/s @ h2benchw
    1164mb/s vs 1608mb/s @ everest

    5x vs 4x
    vs

    6x vs 4x
    vs


    and thats my 100mhz pcie vs your xxxmhz pcie ??
    Last edited by NapalmV5; 09-17-2009 at 01:55 PM.

  16. #341
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    I beat you @ average read still in everest :p

  17. #342
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    you should, you've got ~6x drives.. but not 1gb/s.. my avg is higher too due to cache

    what pcie ? 125mhz ? higher ?

    your h2benchw was @ 5x not 6x and everest @ 6x

    nizzen: 5x * 150 ~ 750mb/s hence the 711mb/s @ h2benchw
    napalm: 4x * 188 ~ 750mb/s hence the 716mb/s @ h2benchw

    so whether you like it or not your 6x * 150 ~ 900mb/s at best
    Last edited by NapalmV5; 09-17-2009 at 01:58 PM.
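The scaling arithmetic in the post above boils down to: a RAID-0 array's sequential ceiling is roughly per-drive throughput times drive count, capped by what the controller/bus can move. A minimal sketch (the 2GB/s cap is an assumed controller limit for illustration):

```python
# Theoretical RAID-0 sequential ceiling: per-drive MB/s times drive
# count, limited by the controller/bus bandwidth.
def raid0_ceiling_mb_s(per_drive_mb_s, n_drives, controller_cap_mb_s):
    return min(per_drive_mb_s * n_drives, controller_cap_mb_s)

print(raid0_ceiling_mb_s(150, 5, 2000))  # 750, vs the 711MB/s h2benchw run
print(raid0_ceiling_mb_s(188, 4, 2000))  # 752, vs the 716MB/s h2benchw run
```

Both measured results land a few percent under the theoretical ceiling, which is about what you'd expect from a well-behaved stripe.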

  18. #343
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    LSI + 7xintel



    Last edited by Nizzen; 09-17-2009 at 02:08 PM.

  19. #344
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    note: block size: 512K

    lol show me some real apps on that ^ and then tell me you still beat me

    what is up with 194mb/s minimum ??

  20. #345
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    Quote Originally Posted by NapalmV5 View Post
    lol show me some real apps on that ^ and then tell me you still beat me

    what is up with 194mb/s minimum ??

    But do you beat that :p

    Did you test PCMark Vantage? With your controller you should beat the WR easily
    Last edited by Nizzen; 09-17-2009 at 02:14 PM.

  21. #346
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    yeh all i care about is highest bandwidth.. i can get 9260 too if id want to show off high bandwidth numbers.. you aint gonna see 9260 numbers from me

    i dont do vista/win7/vantage.. if i wanted to break WRs i wouldve submitted results..

    im more into real apps.. since thats what i do 99% of the time

  22. #347
    RAIDer
    Join Date
    Jul 2005
    Location
    Norway
    Posts
    699
    Quote Originally Posted by NapalmV5 View Post
    yeh all i care about is highest bandwidth.. i can get 9260 too if id want to show off high bandwidth numbers.. you aint gonna see 9260 numbers from me

    i dont do vista/win7/vantage.. if i wanted to break WRs i wouldve submitted results..

    im more into real apps.. since thats what i do 99% of the time
    if i wanted to break WRs i wouldve submitted results..

  23. #348
    Xtreme Member
    Join Date
    Aug 2009
    Location
    Nelson, New Zealand
    Posts
    367
    How does the Areca 1231ML do with parity RAID (5 or 6)? Still a top performer, or are the 1680 cards better?

  24. #349
    Xtreme Guru
    Join Date
    Dec 2002
    Posts
    4,046
    i dont do raid 5/6.. only do raid 0.. but why would raid 5/6 performance be diff than raid 0 performance ??

    if id do raid 5/6 id still do it on the 1231 as long as the drives are sata



    btw: i just noticed rhys been banned.. what happened ??

  25. #350
    Xtreme Member
    Join Date
    Aug 2009
    Location
    Nelson, New Zealand
    Posts
    367
    Quote Originally Posted by NapalmV5 View Post
    i dont do raid 5/6.. only do raid 0.. but why would raid 5/6 performance be diff than raid 0 performance ??
    Small block writes are slower, since there are two reads and two writes for each block (data and parity both have to be read and written).

    Also, the parity calculations themselves take time and can introduce latency.
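The read-modify-write penalty described above can be expressed as arithmetic: a small random write to RAID 5 costs 4 physical I/Os (read data, read parity, write data, write parity), RAID 6 costs 6 (two parity blocks), RAID 0 costs 1. A sketch with illustrative per-drive numbers:

```python
# Effective small-block random write IOPS after the RAID write penalty.
# Penalty = physical I/Os per logical write: RAID 0 = 1, RAID 5 = 4
# (read data + read parity + write data + write parity), RAID 6 = 6.
WRITE_PENALTY = {"raid0": 1, "raid5": 4, "raid6": 6}

def effective_write_iops(raw_iops_per_drive, n_drives, level):
    return raw_iops_per_drive * n_drives / WRITE_PENALTY[level]

# e.g. 4 drives at 5000 random-write IOPS each (illustrative numbers):
for level in ("raid0", "raid5", "raid6"):
    print(level, effective_write_iops(5000, 4, level))
```

On those assumed numbers, RAID 5 delivers a quarter of the RAID-0 write rate before the parity math itself adds any latency, which is why the penalty dominates small-block write workloads.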

