Quote Originally Posted by dinos22
It could be early BIOSes as well, who knows.

but i would definitely NOT trust software like sandra

32M SuperPi is still the best measure... tapakah is showing a 1% difference with the same settings. That is pretty big for Pi, so I'll take it
I'd trust Lavalys/Sandra more than SuperPi, to be honest, especially when you want to put a number on the performance scaling of brand-new technology. SuperPi is good for comparing technology you already understand, but not that good when you don't know the technology by heart.

A BIOS issue is a possibility, but I can't understand why Intel would send out motherboards that don't deliver the benefits of triple channel right away. This is one of the KEY features of the X58/Nehalem platform; it wouldn't make any sense.

In addition, even if the BIOS isn't ready, why would Intel keep this information internal? They know people will focus on this feature, so why on earth would they choose the path that leads to bad publicity when various review sites claim triple channel just doesn't work?

Quote Originally Posted by Metroid
Nvidia 680i gives me almost 100% more bandwidth going from single to dual channel; not sure about Intel chipsets, as I have never tested them with Sandra.
Different settings, different results. All I know is that the triple-channel results we are seeing now are way too low to be correct. And if they are correct ...

Quote Originally Posted by bingo13
They are so wrong...at least in the tests we ran.
Just got the QPI performance scaling confirmed by another source, so I wonder what tests you ran ;-).

This makes me certain that I'll need a setup myself to test everything out. Why oh why did I pass on the Madshrimps Nehalem coverage ...

Quote Originally Posted by Calmatory
Well, NF2 was back in 2002, 6 years ago. Memory bandwidth demand has increased quite a lot since those days, so the comparison with NF2 is pretty much worthless IMO. Besides, I only saw sub-15% improvements, though the RAM was cheap Kingston and the FSB was sub-166 all the time.
Actually, I was comparing with NF2 because that's when I first experienced the benefit of dual channel, just like the Core i7 is the first platform that uses triple channel.

I was running 260+ FSB, so maybe it's not such a fair comparison.

To be honest, I should re-read some reviews to draw a decent conclusion, but I think you get my point when I show you a table with 0% improvement going from dual to triple channel. In the past we always DID notice the bandwidth increases, 'we' being the (extreme) overclockers. The fact that we are NOT noticing them at the moment is a sign.

Quote Originally Posted by JumpingJack
What it boils down to is that most of today's client applications do not produce a demand that exceeds even modest memory bandwidths, aided by a strong cache structure. Increasing BW, either by clocking up the bus or increasing memory clocks, gives minor improvements in most cases -- some exceptions are WinRAR's internal benchmark, which does nothing but read/write random data to memory while executing its compression engine... it shows significant sensitivity to BW. I have also seen notable sensitivity with Mainconcept's H.264 encoder.

So, as Dr. Who? is saying, at 12 GB/s+ memory bandwidth is not really going to impact what you observe in real life -- not because the BW is not real, but because the applications used on the desktop never demand throughput that exceeds the capabilities.

You will see BW play an important role in 2S servers, where the applications are more throughput-oriented, as opposed to client-side applications, which are really just task-based.
In real-life applications, I don't even worry about dual channel. You're not going to notice anything when opening Internet Explorer or Word, but you will notice when you run resource-hungry programs such as video encoders or data compression tools. But isn't the Core i7/X58 platform designed for the normal end user? I don't see why you bring up the 2S server example; it has nothing to do with dual/triple channel working or not.

My question is why we don't see any improvement in benchmarks, which very often amplify the differences in performance. When we see differences in benchmark utilities, we can be sure whether or not the technology is working, even at 12 GB/s.
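For what it's worth, the kind of synthetic test we're arguing about is easy to sketch: stream a buffer much larger than the CPU caches and time the copies, which is roughly the idea behind Sandra's memory benchmark. A minimal Python/NumPy illustration (the buffer size and repeat count are arbitrary choices of mine, not anything Sandra actually uses):

```python
# Rough synthetic bandwidth test: copy a buffer far larger than any CPU
# cache and report the best observed throughput. Because the buffer does
# not fit in cache, the copy is limited by memory bandwidth, so dual vs
# triple channel should show up here even when desktop apps don't care.
import time
import numpy as np

def measure_bandwidth(size_mb=256, repeats=5):
    """Return approximate copy bandwidth in GB/s."""
    src = np.ones(size_mb * 1024 * 1024 // 8, dtype=np.float64)
    dst = np.empty_like(src)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        np.copyto(dst, src)  # streams the whole buffer through memory
        best = min(best, time.perf_counter() - start)
    # A copy reads the source and writes the destination: 2x bytes moved.
    return 2 * src.nbytes / best / 1e9

if __name__ == "__main__":
    print(f"~{measure_bandwidth():.1f} GB/s")
```

If triple channel were working, a test like this run at identical CPU and memory clocks should show a clear gap versus dual channel; a cache-resident workload like SuperPi largely won't.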

Quote Originally Posted by AuDioFreaK39
Why is the CPU 22MHz faster for the dual-channel benchmark and 11MHz faster on the memory than the triple-channel benchmark?

http://xtreview.com/addcomment-id-66...s-SMT-OFF.html
http://xtreview.com/images/corei712and3chanel06.png

and also, shouldn't the dual-channel benchmark pwn the single much more than this?
No idea why they are not clocked exactly the same; they're not my benchmarks anyway. The small difference in frequency is NOT the reason the performance differences are this small, though ;-).

And yeah, the xtreview benchmarks are screwed, I think.