A lot of talk on the X58 chipset for this, but any news on the chipset for the dual-processor version? Really interested to see whether any block diagrams of it have leaked to the web yet. Looking for a good server/workstation board.
So that would imply that the PCIe slots hang directly off a particular CPU (say, 3 slots for CPU 0 and 3 for CPU 1), and any data transfer between those would have to cross over from one CPU to the other. Not that bad in itself, actually, but it would require fully populating the board with CPUs to use all the slots (in my case no big deal, as I am planning that anyway). Still, knowing the number of PCIe lanes each CPU dedicates to the slots would be useful, as would the speed of the Ibexpeak that would need to be tied to each CPU, plus the bandwidth between the CPUs to handle process migration on top of I/O from attached periphs.
All in all, more information would be real nice. ;)
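If you want to play with those numbers in the meantime, here's a quick back-of-the-envelope sketch. The PCIe 2.0 per-lane figure is just the spec, but the QPI bandwidth is my assumption for an early 6.4 GT/s link, not anything confirmed:

Code:
# Back-of-the-envelope: aggregate PCIe 2.0 bandwidth hanging off each CPU
# versus the QPI link you'd cross when a slot on CPU 0 talks to CPU 1.
# The QPI figure assumes an early 6.4 GT/s link; none of this is confirmed.

PCIE2_GBPS_PER_LANE = 0.5   # PCIe 2.0: ~500 MB/s per lane, per direction
QPI_GBPS_PER_LINK = 12.8    # assumed: 6.4 GT/s x 2 bytes, per direction

for lanes_per_cpu in (12, 18):  # e.g. 24 or 36 total lanes split over 2 CPUs
    pcie = lanes_per_cpu * PCIE2_GBPS_PER_LANE
    print(f"{lanes_per_cpu} lanes per CPU: {pcie:.1f} GB/s PCIe "
          f"vs {QPI_GBPS_PER_LINK} GB/s QPI crossing")
# As long as the QPI link outruns the local PCIe aggregate, the cross-over
# isn't the bottleneck -- the question is what Intel actually ships.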
Tylersburg is for both single and dual socket.
http://pc.watch.impress.co.jp/docs/2...gai388_09l.gif
Quote:
* Tylersburg-24S – 24 PCIe lanes, 1x QuickPath Link
* Tylersburg-24D – 24 PCIe lanes, 2x QuickPath Links
* Tylersburg-36S – 36 PCIe lanes, 1x QuickPath Link
* Tylersburg-36D – 36 PCIe lanes, 2x QuickPath Links
http://babelfish.yahoo.com/translate...rUrl=Translate
Just one more QPI link.
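If it helps keep the four SKUs straight, here is that list as a quick lookup with peak bandwidth worked out. Lane and link counts come straight from the quote; the ~500 MB/s per lane is just the PCIe 2.0 spec:

Code:
# The four Tylersburg SKUs from the quote above.
# S = one QPI link (single socket), D = two QPI links (dual socket).
TYLERSBURG = {
    "Tylersburg-24S": {"pcie_lanes": 24, "qpi_links": 1},
    "Tylersburg-24D": {"pcie_lanes": 24, "qpi_links": 2},
    "Tylersburg-36S": {"pcie_lanes": 36, "qpi_links": 1},
    "Tylersburg-36D": {"pcie_lanes": 36, "qpi_links": 2},
}

for name, spec in TYLERSBURG.items():
    # PCIe 2.0 moves roughly 500 MB/s per lane, per direction
    bw = spec["pcie_lanes"] * 0.5
    print(f"{name}: {spec['pcie_lanes']} lanes (~{bw:.0f} GB/s each way), "
          f"{spec['qpi_links']} QPI link(s)")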
Pretty pictures
The Flextronics board looks very 'fancy', if you will: blue PCB, yellow PCIe slots. Maybe an ST2 competitor?
He's on about the dual-core Havendale and quad-core Lynnfield solutions. The dual-socket version will of course have QPI, with Tylersburg as the chipset.
Thanks! That's exactly what I was looking for. So it looks like I have to wait for Tylersburg-36D, as well as for some performance specs on the Tylersburg implementation itself, to see if it can actually handle the traffic of all those lanes, unlike the current server/workstation boards.
The ones with Ibexpeak will be Lynnfield and Havendale, not Bloomfield.
Bloomfield is a quad-core with triple-channel memory and a QPI link to a PCIe switch, with DMI from there to the ICH10.
Lynnfield is a quad-core with dual-channel memory and an on-die 16- or 20-lane PCIe controller (I can't remember which). It will use a DMI interface (2 GB/s each way) to Ibexpeak, which is essentially the same as the ICH10.
Havendale is like Lynnfield, but a dual-core with an IGP-class GPU on the CPU as well.
So for Lynnfield and Havendale it's impossible to run more than one x16 or two x8 slots for GFX, and I doubt many motherboards will offer more than one x16 slot.
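To put numbers on that, here's a rough sketch of the possible slot splits, assuming the 16-lane case (if it turns out to be 20, you'd just get a spare x4 on top):

Code:
# Slot configurations possible off Lynnfield/Havendale's on-die PCIe
# controller, assuming 16 lanes total (the 20-lane case adds a spare x4).
CONTROLLER_LANES = 16

for config in ([16], [8, 8]):        # one x16, or two x8 for dual GFX
    assert sum(config) <= CONTROLLER_LANES
    slots = " + ".join(f"x{width}" for width in config)
    bandwidth = sum(config) * 0.5    # PCIe 2.0: ~0.5 GB/s per lane, one way
    print(f"{slots}: ~{bandwidth:.0f} GB/s aggregate, one direction")

# Everything else (storage, USB, LAN) hangs off Ibexpeak over DMI at
# ~2 GB/s each way, so there's no headroom for extra GFX slots there.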
I thought that Gainestown was the dual-socket version and Bloomfield was the single-socket version; has that changed? I still don't like only having 36 PCIe lanes (I really want something like 48-56, which is why AMD is still in the running, but I can't ignore Intel's processor performance).
Bloomfield is the single-socket QPI-supporting processor, Gainestown is the dual-socket QPI-supporting processor, and Lynnfield and Havendale are the DMI-connected processors with onboard PCIe. Going by your current system's specs, those last two are not really something to be interested in; from the specs released so far they're more for Dells and laptops. Besides, only Bloomfield and Gainestown are coming this year. The others are due later next year.
If you look at the diagrams in my link it's also possible to have two Tylersburg chips on the board connected by QPI and to the two CPUs, for 64 PCI-E 2.0 lanes.
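Rough math on that dual-IOH layout. How the 64 lanes would get carved up is my guess (say, two x16 slots per IOH), not something shown in the diagrams:

Code:
# Dual-IOH sketch: two Tylersburg chips, each linked to a CPU and to each
# other over QPI. The two-x16-slots-per-IOH split is an assumption.
N_IOH = 2
SLOTS_PER_IOH = [16, 16]            # two x16 slots per IOH (guess)

usable_lanes = N_IOH * sum(SLOTS_PER_IOH)
aggregate = usable_lanes * 0.5      # PCIe 2.0, one direction
print(f"{usable_lanes} usable PCIe 2.0 lanes, ~{aggregate:.0f} GB/s aggregate")
# -> 64 lanes / ~32 GB/s: finally enough to feed more than one or two slots.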
This should clear up some confusion...
http://members.shaw.ca/virtualrain/n...alem-chart.png
More here, including diagrams of the chipsets...
http://www.nehalemnews.com/2008/04/nehalem-faq.html
BEEEECKTONNN
Would anyone be able to buy that mofo?
Look at the prices of Tigerton and Dunnington for your answer to that..
Besides, dual-socket systems will finally be able to run regular DDR, so why ditch that by buying immensely expensive processors and then running FB-DIMMs? For a desktop/workstation it doesn't make sense no matter how much money you have. :rofl:
Thanks, and on the above specifically: it would be bloody awesome if some manufacturer actually produced one. The biggest problem I've seen so far with server and workstation boards is a complete lack of I/O performance. Having 1 or 2 slots run well is good for a desktop, but not when you really want to push data.
So has it been confirmed yet whether the mainstream, performance, and server segments can all be overclocked?
There have been numerous references to overclocked Bloomfield CPUs at Computex and elsewhere.
I'm personally expecting the 3.2GHz part to be an Extreme-branded processor with an unlocked multi. The other Bloomfields will likely have upward-locked multis.
More info here on how the clock/power domains will likely work...
http://www.nehalemnews.com/2008/05/e...r-domains.html
No further word, other than pure speculation, on Lynnfield overclockability. Even if the clock generator is on-package for Lynnfield, there's no reason it can't still be exposed in the BIOS to allow adjustment. Just because they can make it unadjustable doesn't mean they will.
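For what it's worth, the clock math itself is simple either way; the whole question is which knobs the BIOS exposes. The stock BCLK and the multipliers below are my assumptions, not confirmed specs:

Code:
# Nehalem-style clocking: core clock = base clock (BCLK) x multiplier.
# Extreme parts get an unlocked multi; locked parts would rely on BCLK.
# The 133 MHz BCLK and the multipliers here are assumptions.
def core_clock_mhz(bclk_mhz, multiplier):
    return bclk_mhz * multiplier

print(core_clock_mhz(133.33, 24))   # ~3200 MHz: the expected 3.2 GHz part
print(core_clock_mhz(166.00, 24))   # ~3984 MHz: BCLK overclock, multi locked
print(core_clock_mhz(133.33, 28))   # ~3733 MHz: multi overclock on an Extreme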
I believe they may be thinking DDR3 should be mainstream by the time the mainstream processors and boards are released, and I sure hope so.
But at this point it seems to be moving along fairly slowly, in my opinion.
Then again, wouldn't the cheapest board/processor be using the cheapest components, ergo DDR2?
But am I wrong in assuming Bloomfield will most likely be Extreme versions only?
I'm talking more about this new generation's equivalents of the Q9300, Q9450, Q6600, and E8400 being overclockable.