My concern is mostly based on previous systems where the RAID array would easily do 100 MB/s and more, but as soon as that data was sent over the network the shared PCI bus was completely saturated, limiting network transfers to about 60 MB/s. With dedicated PCI-E lanes that restriction (the shared PCI bus, I mean) obviously no longer applies, but since I haven't had the chance to test a setup like this on a 'normal' motherboard, I wasn't sure what to expect.
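Just to make that old bottleneck concrete, here's a rough back-of-the-envelope sketch. It assumes the classic 32-bit/33 MHz PCI bus (~133 MB/s theoretical ceiling) with both the RAID controller and the NIC sitting on that same bus; the numbers are illustrative, not measurements from my box:

```python
# Rough bandwidth-budget check: classic shared PCI vs. a dedicated PCI-E lane.
# All figures are theoretical ceilings; real-world efficiency is lower.

PCI_SHARED_BUS = 133.0   # MB/s, 32-bit/33 MHz PCI, shared by every device on the bus
PCIE_X1_GEN1   = 250.0   # MB/s per direction, per lane, dedicated to one device

def shared_bus_network_limit(raid_read_mb_s, bus_mb_s=PCI_SHARED_BUS):
    """On a shared bus the data crosses it twice (RAID controller -> RAM,
    then RAM -> NIC), so the usable transfer rate is at best half the bus."""
    return min(raid_read_mb_s, bus_mb_s / 2)

print(shared_bus_network_limit(100.0))  # ~66 MB/s, close to the ~60 MB/s I used to see
print(min(100.0, PCIE_X1_GEN1))         # with a dedicated lane the array itself is the limit
```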
Fantastic information, this is something I can work with. Although the queueing theory and Markov models usually only come into play under higher loads, it's still a very interesting read, thank you very much.

Before anyone can really point you in a direction, a lot more information would be needed to model your type of access requirements and setup. I've just posted some spreadsheets in the storage/RAID thread for doing some quantitative analysis, but they are only a start.
http://www.xtremesystems.org/forums/...=150176&page=2
It really comes down to what your workload type is and what performance level you are seeking. From that you then design the system to match. There's also the issue of disk subsystem utilisation curves, which come into play (queueing theory and Markov models like the M/M/1 or M/M/m).
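For anyone unfamiliar with the reference, a minimal sketch of the standard M/M/1 single-server results (utilisation, mean queue length, mean response time); the arrival and service rates below are made-up example numbers, not anything measured from this setup:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Standard M/M/1 results: utilisation rho, mean number in system L,
    and mean time in system W (Little's law: L = arrival_rate * W)."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("unstable queue: arrival rate >= service rate")
    L = rho / (1 - rho)                    # mean number of requests in the system
    W = 1 / (service_rate - arrival_rate)  # mean time each request spends in the system
    return rho, L, W

# Example: 8 requests/s arriving at a disk subsystem that can serve 10 requests/s
print(mm1_metrics(8, 10))   # rho = 0.8, L = 4.0 requests, W = 0.5 s
```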
As for my workload type, it's mostly transferring big files over the network (big as in 100 MiB and more) to about 10 clients simultaneously, so the queueing part will be quite simple and most likely won't cause problems. My performance concerns are primarily about the hardware limits of the system.
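As a quick sanity check on those numbers, assuming a single gigabit NIC on the server (my assumption, nothing more), the network is the first ceiling the clients will hit:

```python
# Quick sizing check for the stated workload, assuming one gigabit NIC on the server.
GIGABIT_WIRE   = 1000 / 8    # 125 MB/s raw line rate
TCP_EFFICIENCY = 0.90        # rough allowance for Ethernet/IP/TCP overhead

clients  = 10
file_mib = 100

usable     = GIGABIT_WIRE * TCP_EFFICIENCY            # ~112 MB/s aggregate
per_client = usable / clients                         # ~11 MB/s per client
seconds_per_file = file_mib * 1.048576 / per_client   # MiB -> MB, then divide

print(f"{per_client:.1f} MB/s per client, ~{seconds_per_file:.0f} s per 100 MiB file")
```

Worth keeping in mind that 10 concurrent streams also turn what looks like sequential reads into semi-random access on the array side.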
It looks like I have enough information to start a proper analysis; perhaps I'll try calculating the Markov chain for it, which could give me some new insights.
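In case anyone else wants to play with the same idea, a minimal sketch of the birth-death chain behind an M/M/m queue, solved numerically for its steady state; the rates, server count, and truncation limit below are arbitrary example values:

```python
def mmm_steady_state(arrival_rate, service_rate, servers, max_states=200):
    """Steady-state distribution of the M/M/m birth-death chain, truncated at
    max_states customers (fine as long as the tail probability is negligible)."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    if a / servers >= 1:
        raise ValueError("unstable: offered load exceeds total service capacity")
    # Balance equations of the chain, built up recursively:
    # w_n = w_{n-1} * a/n while n <= m (another server picks up the work),
    # w_n = w_{n-1} * a/m once all m servers are busy.
    weights = [1.0]
    for n in range(1, max_states + 1):
        weights.append(weights[-1] * a / min(n, servers))
    total = sum(weights)
    return [w / total for w in weights]

# Example: requests arriving at 8/s, 2 "servers" each handling 5/s
p = mmm_steady_state(8, 5, 2)
print(f"P(system idle) = {p[0]:.3f}, P(all servers busy) = {sum(p[2:]):.3f}")
```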