It did for Anand; check out the link.
I don't care if it is a BTX_LX chip; the end result is all that matters. I don't want any chip like nVidia's on any mobo I buy.
The bridge chip is absolutely not needed; the lock is in the drivers. That, and the cards actually work better without the chip. This is just a dirty solution to enable SLI on X58, just like in Skulltrail. An unneeded chip in the chain can only make things worse.
I don't know about the 4870X2 until I test it, but I'll tell you one thing: the 3870X2 does NOT work well. You get about the same FPS as 3870CF, and you get the exact same problems and incompatibilities. I do have hopes for the microstuttering problem after looking at Sampsa's early tests, but the "you need a CF profile for each game, and it has to work well" thing still scares me, and I bet that with 4870CF and 4870X2 I'll face the same situation I faced when I used 3870CF and 3870X2 for a few weeks. The CrossFire Sideport has nothing to do with this, as it's only a connection between the cards.
I know, remember I called it a DRM ROM chip? I also complained about it downgrading Skulltrail's PCI-E 2.0 support back to PCI-E 1.x. Either way, I wouldn't want it on an X58 when I'd planned on using at least a 4850 X2 card. But thank you very much for the info :up:
Yes, I remember now :up: Ah man, sometimes I see the day when multi-GPUs will not be driver dependent and will appear as a single chip to the OS... then I wake up :( :D
This bridge is just another example of how good NV is at business, making money on everything. And how they don't care a :banana::banana::banana::banana: about customers having to deal with their non-GPU chip crap.
The worst part is that there's a crowd ready to lap it all up and probably even think of it as an 'improvement' on the X58's PCI-E capabilities, because it says Nvidia on it.
Imagine if Nvidia made a laptop, a really really thin one, and a fancy looking phone. :rofl:
I'll wait and see on the 4870X2. I think a lot of folks are leery of driver and quality issues, so I'll watch some testing before leaping. But I don't think the X58 is going to be good with the NF200 on it. New chipsets have their kinks, and the last thing I want to track down is some problem that has nothing to do with the X58 itself, just because of some crazy add-on chip. If AMD/ATI and Intel can both do (or will do soon) dual-GPU devices based solely on plug-in x16 slots, then Nvidia needs to keep up or shut up.
And there is one thing always to remember: no matter how much you might love dual ANYTHING right now, 6-8 months from now you'll be staring at the newest card, drooling, and wondering why you were so crazy to buy dual cards the year before. Anyone remember the dual 7900s? The industry advances quickly, so sometimes picking your current poison in a single-card format and ditching it in 6 months is the better strategy.
Anyway, Nvidia is mucking around trying to make a buck, and they will end up making less than the mucking around was worth. And ATI has to prove they really have it together this time around. Profiles for every game is ridiculous as well.
So we will have to wait until 2009 for a mainstream chipset (P55) and mainstream boards? :shrug:
So is 6GB (2x3) going to be the optimal format for Nehalem? If there are six memory slots, two for each memory channel, then it seems it would make sense to either populate two slots at a time and fill each channel up to its dual capacity, or else fill one slot per channel.
Trying to sort this out.
Anemone, it will be better to fill each channel with just one module to keep Command Rate at 1T.
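For anyone trying to picture it, here's a tiny C sketch (my own illustration, nothing official; the slot counts and DIMM sizes are just the two layouts being discussed) of how 6GB can land on a three-channel board with six slots:

[code]
/* Toy illustration of 6GB on a triple-channel, six-slot Nehalem board:
 * 3x2GB = one module per channel (three slots used),
 * 6x1GB = both slots on every channel filled. */
#include <stdio.h>

#define CHANNELS          3
#define SLOTS_PER_CHANNEL 2

static void report(const char *name, int modules_per_channel, int module_gb)
{
    int total = CHANNELS * modules_per_channel * module_gb;
    printf("%-7s %d x %d GB -> %d GB total, %d module(s) per channel%s\n",
           name, CHANNELS * modules_per_channel, module_gb, total,
           modules_per_channel,
           modules_per_channel == 1 ? " (one slot free per channel)"
                                    : " (every slot filled)");
}

int main(void)
{
    printf("Board: %d channels x %d slots\n", CHANNELS, SLOTS_PER_CHANNEL);
    report("3x2GB:", 1, 2);   /* the one-module-per-channel layout */
    report("6x1GB:", 2, 1);   /* the fully populated layout */
    return 0;
}
[/code]

Going by the advice above, the 3x2GB layout is the one that leaves each channel with a single module and the best shot at holding 1T.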
ty :)
It's not the same, because Intel uses faster, larger cache setups. Then if that fails, Intel uses Smart Cache and Smart Memory Access. 1T or 2T just isn't that critical on any kind of Core 2. These features remove most of the negative effects of Intel's FSB.
Nehalem is supposed to have even faster cache and less of it, since the large caches were largely there to reduce dependence on the FSB. No FSB means less cache is needed.
Well, the entire point of cache is that you hope the majority of your data is in your cache, which is on-die, so you don't have to go off-die and search for it in memory. That's where prefetching came in for Intel.
You said that if the larger cache failed, then they'd use Smart Cache. But the only thing Smart Cache was... was the ability to divide the L2 cache up among the cores. So I don't see how that would take over if your data wasn't in the cache.
True, but I meant "failed" as in it couldn't hold all of the data. Then there's using it to assist new features that aren't even talked about much: running two OSes simultaneously, encryption, safety features, etc. Just look at Vista sucking up CPU cycles and RAM; it eats up cache as well.
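Since the thread got into why on-die cache matters, here's a rough C sketch (purely illustrative; the 64 MiB buffer size and 4096-element stride are arbitrary assumptions) showing what happens when the data isn't in cache: the same number of array reads runs far slower once the access pattern defeats the cache and every read has to go out to main memory.

[code]
/* Rough demo of cache locality: a sequential walk stays mostly in cache,
 * a large-stride walk over the same number of elements mostly misses and
 * has to fetch from main memory. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024)   /* 64 Mi ints, far larger than any L2/L3 */

static double walk(int *buf, size_t stride)
{
    clock_t start = clock();
    volatile long sum = 0;
    /* Touch the same number of elements for either stride, so only the
     * access pattern differs, not the amount of work. */
    for (size_t pass = 0; pass < stride; pass++)
        for (size_t i = pass; i < N; i += stride)
            sum += buf[i];
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    int *buf = malloc(N * sizeof *buf);
    if (!buf) return 1;
    for (size_t i = 0; i < N; i++) buf[i] = (int)i;

    printf("sequential walk: %.2f s\n", walk(buf, 1));     /* cache friendly */
    printf("strided walk:    %.2f s\n", walk(buf, 4096));  /* mostly misses  */

    free(buf);
    return 0;
}
[/code]

Compile it with something like gcc -O1 and the strided walk should come out several times slower on pretty much any Core 2 or Nehalem class machine, which is the whole point of keeping the working set on-die.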
Little exposure again :clap:
http://i245.photobucket.com/albums/g...m/WPrime_G.jpg
...
Only 17GHz NB :p:
You need a wider task manager :eek: