Not bad, I guess; I still prefer something that doesn't take extra space in my case and slots on my motherboard.
Does SATA3 bring anything new besides higher transfer speed anyway? Wasn't there some improved NCQ, iirc?
That would be nice, but I have to disappoint you: you are a bit behind
the times. See my earlier post in this thread: Pliant introduced
SAS-6Gbps SSDs with 525 / 340 MBps r/w speeds. I think that within
half a year, there will be consumer SATA-6Gbps SSDs with similar
speeds (though they will still be expensive).
Intel engineers said Netburst was good...
Not yet on the visible horizon, but up and coming.
DMI is ancient. The problem is not two SATA-6Gbps drives on DMI, but
the fact that not only does all I/O traffic (up to a 6-drive RAID array
on the P55 PCH, Gb Ethernet, sound, etc.) go through it, but all PCIe
slots (besides the x16 or 2 x8 from the CPU PCIe controller) are also
branched off it. Why couldn't Intel just put 21 PCIe 2.0 lanes in the
processor? That way there'd be x16, x4 and x1 off the CPU, the latter
two would be offloaded from the P55 PCH, and it would have enough bw
for I/O and drives.
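To put rough numbers on the sharing problem, here is a minimal Python sketch; the device list and per-device figures are my own illustrative assumptions (nominal line rates, not measured throughput or Intel specs):

# Back-of-the-envelope: tally hypothetical peak demand of devices behind
# the P55 PCH against the ~1 GB/s-per-direction DMI uplink to the CPU.
# All numbers are assumptions for illustration only.

DMI_BW_GBPS = 1.0  # approx. DMI bandwidth per direction, in GB/s

pch_devices = {                # hypothetical worst-case demand, GB/s per direction
    "SATA-6Gbps SSD #1": 0.6,  # 6 Gbit/s line rate ~ 0.6 GB/s
    "SATA-6Gbps SSD #2": 0.6,
    "Gb Ethernet":       0.125,
    "PCIe 2.0 x1 slot":  0.5,
    "PCIe 2.0 x4 slot":  2.0,
}

total = sum(pch_devices.values())
print(f"Aggregate peak demand: {total:.2f} GB/s vs DMI {DMI_BW_GBPS:.2f} GB/s")
print(f"Oversubscription: {total / DMI_BW_GBPS:.1f}x")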
Yes, it was good - in the timeline of the Athlon XP. Or in the same way we can blame AMD engineers for the Athlon 64 (which has the same relation to Core 2 as the Pentium 4 has to the Athlon 64).
Taking your logic, we should get rid of QPI and HT as well. Why? Simple. Let's see:
Quote:
DMI is ancient. The problem is not two SATA-6Gbps drives on DMI, but
the fact that not only does all I/O traffic (up to a 6-drive RAID array
on the P55 PCH, Gb Ethernet, sound, etc.) go through it, but all PCIe
slots (besides the x16 or 2 x8 from the CPU PCIe controller) are also
branched off it. Why couldn't Intel just put 21 PCIe 2.0 lanes in the
processor? That way there'd be x16, x4 and x1 off the CPU, the latter
two would be offloaded from the P55 PCH, and it would have enough bw
for I/O and drives.
2000MHz HyperTransport bandwidth - 8GB/s in each direction (16GB/s aggregate, bi-directional)
x16 PCIe 2.0 bandwidth - 8GB/s in each direction (16GB/s aggregate)
Which means that one graphics card alone saturates all the bandwidth of HT. And what about the mouse, Ethernet and... yes... storage (SATA3, SATA6, RAID or whatever)? And last but not least, what is the point of a second x16 PCIe slot when the bandwidth of HT is already saturated by the first x16?
I would say that the total CPU data bandwidth of Lynnfield is greater than the data bandwidth of Phenom II, since Phenom II has only HT (8GB/s in each direction) whereas Lynnfield has PCIe 2.0 x16 (8GB/s in each direction) + DMI (1GB/s in each direction, dedicated to I/O).
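A quick sanity check of that arithmetic in Python (round per-direction figures from the post; nothing here beyond the quoted numbers):

# HyperTransport: DDR signalling -> 2 transfers per clock, width/8 bytes each
ht_link   = 2.0 * 2 * (16 / 8)  # 2 GHz, 16-bit link = 8.0 GB/s per direction
pcie2_x16 = 16 * 0.5            # PCIe 2.0 = 0.5 GB/s per lane per direction

# One x16 GPU at peak consumes the entire HT link on its own
print(f"HT headroom left for storage/NIC: {ht_link - pcie2_x16} GB/s")

# Lynnfield: x16 straight from the CPU plus a separate ~1 GB/s DMI link
print(f"Lynnfield CPU-attached total: {pcie2_x16 + 1.0} GB/s per direction")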
Not even then -- clock for clock, it lost badly to its predecessor, the
P-III. NetBurst was such a dead end that its successor, Core, was
descended from the P-III, not the P-4.
First, your numbers are way off.
http://en.wikipedia.org/wiki/List_of...Computer_buses
See HT3.0 and HT3.1 at the end of the list. Even if those are 32-bit-link
numbers, the 16-bit-link numbers are still much higher than PCIe 2.0 x16 bw.
Second, you make the mistake of aggregating Lynnfield's bw, which you
can't do. DMI is limited to x4. Even if it is saturated while the x16 to
the graphics is not, it still won't get extra bw; it will stay limited to x4.
Someone needs to make a PCIe 2.0 x8 card with lots of SATA3 + USB3 ports.
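A rough feasibility sketch for such a card, assuming a hypothetical 4 + 4 port layout and nominal line rates (my own assumptions, not a real product spec):

# Would the x8 uplink keep up with all ports running flat out?
PCIE2_X8 = 8 * 0.5         # PCIe 2.0 x8 uplink: 4.0 GB/s per direction

ports = {                   # (port count, GB/s per port per direction)
    "SATA 6Gbps": (4, 0.6),
    "USB 3.0":    (4, 0.5), # 5 Gbit/s with 8b/10b encoding ~ 0.5 GB/s
}

demand = sum(n * bw for n, bw in ports.values())
print(f"Peak port demand {demand:.1f} GB/s vs x8 uplink {PCIE2_X8:.1f} GB/s")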
It was not supposed to be faster clock for clock. NetBurst was designed as a "speed demon" while the Athlon/Pentium III were designed as "brainiacs". It was arguable which style of CPU design is better, but you never know until you try it. BTW, many technologies that emerged in NetBurst are now successfully used in Nehalem (such as HT (Hyper-Threading), the loop detector, and prefetchers).
No, my numbers are OK. Check them again:
Quote:
First, your numbers are way off.
http://en.wikipedia.org/wiki/List_of...Computer_buses
See HT3.0 and HT3.1 at the end of the list. Even if those are 32-bit-link
numbers, the 16-bit-link numbers are still much higher than PCIe 2.0 x16 bw.
http://en.wikipedia.org/wiki/HyperTransport
Look at the table "HyperTransport frequency specifications"
Max. bandwidth at 16-bit, unidirectional, for 2.6GHz is 10.4GB/s, which means that for 2.0GHz it will be 8GB/s in each direction.
PCIe:
http://en.wikipedia.org/wiki/PCI_Express
PCIe 1.x is often quoted to support a data rate of 250 MB/s in each direction, per lane. This figure is a calculation from the physical signaling rate (2.5 Gbaud) divided by the encoding overhead (10 bits per byte). This means a sixteen-lane (x16) PCIe card would then be theoretically capable of 16 x 250 MB/s = 4 GB/s in each direction. Which means 8GB/s in each direction for a PCIe 2.0 x16 link.
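Re-deriving those figures from the signalling rates (this is just the arithmetic from the excerpts above, written out in Python):

# PCIe 1.x: 2.5 Gbaud per lane, 8b/10b encoding (10 bits per byte)
pcie1_lane_gbps = 2.5 / 10              # 0.25 GB/s per lane per direction
pcie1_x16_gbps  = 16 * pcie1_lane_gbps  # 4.0 GB/s per direction
pcie2_x16_gbps  = 2 * pcie1_x16_gbps    # PCIe 2.0 doubles the rate: 8.0 GB/s

# HyperTransport: clock * 2 (DDR) * width_bits/8 bytes
ht_26ghz_16bit = 2.6 * 2 * (16 / 8)     # 10.4 GB/s per direction
ht_20ghz_16bit = 2.0 * 2 * (16 / 8)     #  8.0 GB/s per direction

print(pcie1_x16_gbps, pcie2_x16_gbps, ht_26ghz_16bit, ht_20ghz_16bit)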
Right. It can't. But at least it doesn't interfere with graphics when the graphics needs full bw. Also, graphics is often more bw-hungry than storage.
Quote:
Second, you make the mistake of aggregating Lynnfield's bw, which you
can't do. DMI is limited to x4. Even if it is saturated while the x16 to
the graphics is not, it still won't get extra bw; it will stay limited to x4.
That way they couldn't market it as a great new achievement in a year or two, when they actually decide to put it inside the CPU :D Besides, there's already a PCIe x16 link that they must have for the proprietary graphics (IGP) they'll put on the chip, so they would need to spend more die space if they put 21-24 or more PCIe lanes directly on the chip. And they still need to sell their chipsets for some reason; if they put all those PCIe lanes on the chip, why not also put a 10Gb NIC, a 10-port USB 3.0 controller and so on on the chip too :p:
None for X58? Sad.
They will sell some of these to early adopters.
No thanks, personally. SATA3 needs time to grow up.
Asus does a good job :)
Everything, I like ASUS products :)
I would like to get one of these controllers and use it on my X58 MB. I don't see why it would not work. I want it because it is PCIe x4.
I see no use for this :|
Waiting for native solutions.