Interesting, Quad GPU's anyone?



EMC2
03-12-2005, 12:58 AM
Was reading some news from CeBIT and ran across this (http://www.xbitlabs.com/news/video/display/20050309164235.html) article over at xbitlabs. While dual GPUs on a card are well known, the interesting aspect was NVidia's mention of SLI support for them.

Quad GPUs :slobber:

Kanavit
03-12-2005, 02:40 AM
wow, Quad SLI... that makes 8-way gpu setups for workstation/server 3d database computing. nVidia will take over Silicon Valley if this is true.

caLume
03-12-2005, 09:44 AM
nice :)

i imagine good-lookin waterblocks on these cards :D

Sentential
03-12-2005, 09:48 AM
I have heard that the SLI technology implemented can support up to 16 GPUs :slobber: So it's not surprising that they are trying to move to 4 GPUs

IvanAndreevich
03-13-2005, 09:50 AM
And tell me what kind of games require this? ;) So impractical IMO. Good for benchmarks - yes.

saaya
03-13-2005, 12:04 PM
yes, asus is indeed thinking about a 4 vpu solution, they told me. dont think it makes any sense though... think of the power consumption and thermal density... and then its all cpu limited anyways...

unless its on a dual opteron sli board like the one from msi i saw :D
its normal atx form factor and has 6 dimm slots! really amazing board :D

jkabaseball
03-13-2005, 12:09 PM
are they going to make a new case with 2 PSUs for all those videocards?

saaya
03-13-2005, 12:16 PM
huh? what do you mean?

D_o_S
03-13-2005, 12:28 PM
are they going to make a new case with 2 PSUs for all those videocards?

CM Stacker can be fitted with 2 PSUs.

saaya
03-13-2005, 12:34 PM
all you need is a server psu, there are atx server psus afaik, so that's what you need.
if you only run one opteron cpu and 2 cards in sli, even a normal atx psu will be fine, the engineer from msi said :)

and you dont have a stupid switch card on the board or jumper like on the dfi!
you can enable and disable sli in the bios and even in windows!!! :D
it had a core cell chip on it and the engineer said the board was built for the hardcore enthusiasts in the first place and only for powerful workstations in the second place, so he said we should expect very nice tweaking options in the bios and nice voltages! :D :D :D

EMC2
03-13-2005, 01:18 PM
--- Ivan ---

Regarding NVidia based products... today, none; two Ultras are enough for anything, two 6600GTs for most things... tomorrow, who knows ;) You have to remember that game developers intentionally tone down the GPU horsepower needed to what is available at the time. If more powerful video engines are available, they'll adjust accordingly ;) At this point, two dual 6600GTs, if the price point was right (say $300 a card), would be a great thing IMO. That price point isn't that far out of whack either...


--- jkabaseball ---

Naw, just one of these would be fine ;)

http://img8.exs.cx/img8/3276/850turbohi5vc.th.jpg (http://img8.exs.cx/my.php?loc=img8&image=850turbohi5vc.jpg)

• 850W continuous and 950W peak power
• NVIDIA® SLI™ READY component
• Four +12V rails @ 17A ea. (12V1-V4=54A/62A pk.)
• Extended ATX form factor fits most std cases
• High-efficiency design (85%) with .99 PFC
• The tightest voltage regulation (+VDC @ 1%)
• Dual PCI Express video power connectors
• 15 drive connectors (6 SATA, 8 Molex, 1 mini)
• The industry's strongest warranty (5 years)
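
(side note: if you're wondering how four 17A rails square with the combined 54A/62A figure, here's a quick back-of-the-envelope sketch... the amp numbers are straight off the spec sheet above, the multi-rail reading of them is just my interpretation)

# Sanity check of the 12V rail figures from the spec sheet above.
# Per-rail limits sum to more than the combined limit, which is how
# multi-rail designs usually work: each rail is individually capped,
# but the shared 12V source tops out lower.

rails = 4
per_rail_amps = 17.0
combined_continuous_amps = 54.0   # "12V1-V4=54A" from the list above
combined_peak_amps = 62.0         # "62A pk."

print(f"sum of per-rail caps: {rails * per_rail_amps:.0f} A")          # 68 A
print(f"combined continuous:  {combined_continuous_amps:.0f} A = "
      f"{combined_continuous_amps * 12:.0f} W on 12V")                 # 648 W
print(f"combined peak:        {combined_peak_amps:.0f} A = "
      f"{combined_peak_amps * 12:.0f} W on 12V")                       # 744 W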


--- Saya ---

For anyone giving thought to a quad (or greater) GPU system, neither thermal nor power density is an issue... both are "easily" resolved (spelled "not air cooled"). Also keep in mind that as the headlong rush to smaller feature sizes continues (die shrinks), the power/thermal requirements will scale downward as well.


--- ----

There are also other uses than just games and benchmarks... they just don't happen to be usually related to the "hobbyist" side of the equation.

Besides... isn't this the XtremeSystems forums :stick: :D

TheJackal
03-13-2005, 01:50 PM
wow

enough said

jkabaseball
03-13-2005, 02:37 PM
oops, I can't read, I thought it was 4 cards, not 4 GPUs. Any of those would be fine then.

elec999
03-13-2005, 02:47 PM
Naw, just one of these would be fine ;)

http://img8.exs.cx/img8/3276/850turbohi5vc.th.jpg (http://img8.exs.cx/my.php?loc=img8&image=850turbohi5vc.jpg)

If I may ask, what's the price of this psu?
Thanks

EMC2
03-13-2005, 03:36 PM
--- Saaya ---


unless its on a dual opteron sli board like the one from msi i saw

Now that would go well... use dual-core opties, 4 CPU cores, 4 GPU cores :hehe:


--- elec999 ---

Quite expensive... $429.00 :eek: But then if you can afford the other toys to go with it, it would likely be the last PSU upgrade you would ever do.

Kanavit
03-13-2005, 05:54 PM
EMC2, where can i buy that SLI ready PSU? Have a link?

jkabaseball
03-13-2005, 06:10 PM
http://www.pcpowerandcooling.com/products/power_supplies/highperformance/index.htm

EMC2
03-13-2005, 07:42 PM
--- Kanavit ---

You'll have to wait a couple of weeks though for that particular PSU :( Currently they have a huge OEM backlog... last time I talked to them, end of March was the probable time frame for end users.

They do have 510 SLIs though, about $230.

saaya
03-15-2005, 12:00 PM
--- Saya ---

For anyone giving thought to a quad (or greater) GPU system, neither thermal nor power density is an issue... both are "easily" resolved (spelled "not air cooled"). Also keep in mind that as the headlong rush to smaller feature sizes continues (die shrinks), the power/thermal requirements will scale downward as well.

we are talking about the current nvidia vpus here, not future vpus or die shrinks :) and asus wont go watercooling from what ive heard.

leadtek will also go dual 6800U on one card it seems, they even seem to be a step ahead of asus. the interesting thing will be compatibility now, since leadtek doesnt have their own board they can make the dual card run on.

yesterday at the intel party i met the gigabyte guys and asked an engineer and he said the gigabyte 3d1 is compatible with any nf4 sli board, the only thing you need to get it working is a bios update. but other manufacturers dont seem to want to change their bios to enable gigabytes and other dual card solutions YET.
i hope that asus used the same design as gigabyte so all dual cards will work on gigabyte and asus boards, which will hopefully push other manufacturers to update their bios files as well and make those cards work in their boards.
will ask dfi about it tomorrow :)

EMC2
03-15-2005, 07:45 PM
we are talking about the current nvidia vpus here, not future vpus or die shrinks :) and asus wont go watercooling from what ive heard.


Was thinking present and future cards, but point taken :)

Since when did stock cooling ever stop anyone? (regarding the ASUS card)

Regarding dual 6800u cards, I can't really see that in an SLI setup without non-air cooling because of the thermal density issue (possible, but eewwwww, bad OC environment)

I would think MB makers (like DFI) that aren't heavily into also making gfx cards would welcome with open arms dual gpu cards to position themselves a step above others ("look, we are compatible with all dual GPU cards")

Will be interested to see what DFI's response is, though I can make a guess.

Prost! :toast:

saaya
03-16-2005, 02:30 PM
the asus dual card will work on gigabyte boards and vice versa, they work together on this :) have to check again with leadtek, but i think asus made the leadtek design, so it should be compatible as well :)

asus will most likely go for dual 6800gt cards because they are significantly cheaper than ultras and run cooler. and they are currently still checking out what cooling they will put on the card, they are talking to zalman about this i think.
i recommended innovatek, swiftech and dangerden to them :D
they were really interested and said they will talk to them :)

EMC2
03-16-2005, 05:58 PM
Nothing from your talk with DFI Saya?

/me thinks Saya has the XtremeTease Disease :p:

saaya
03-17-2005, 09:03 AM
nah im just super tired man...... sorry i didnt post anything yet, i got home this morning, slept a few hours, took a shower, ate something and will now go to sleep again LOL im really super tired and each step hurts because of my tortured feet :lol:
imagine walking around almost 10 hours a day for one week :0

and what should i tell you from dfi? theres not much new they told or showed me except for the nf4 sli board for intel cpus and the 939 nf3 agp board.

the nf4 sli for p4's will most likely have the same voltages (except for vdimm of course, since 4v would be a little high for ddr2 :P) and settings (if not even more stuff to tweak! :D) and it doesnt have the vdimm jumper anymore. the jumper was put on the nf4 939 board since a64s are sensitive to vdimm, so they wanted to add an extra step you have to take if you want high vdimm. that way people dont just try different bios settings for high vdimm without knowing what they are doing and smoke their a64s and eventually the board.
i think it doesnt have the jumpers for sli either, its all done in bios or in windows now iirc. the board layout is of course slightly different.

the nf3 939 agp board looked very nice and is a great alternative for people who dont want to go pciE yet! :)
pretty much the same board with slightly different layout and pretty much the same voltages as the nf4 939 board :)

the voltages and settings arent decided yet, but dfi told me they would see lower voltages and fewer tweaks in the bios as a step backwards, which they dont want to take. they told me to expect the same or even more tweaking options on every new board they will bring out :)

oh and unfortunately dfi wont be making videocards :/
they are checking out the market and will eventually do it if they see a demand thats big enough, but atm they dont have any plans.

danny from dfi whom i interviewed was very very nice and we had a really nice talk for almost an hour :) i will post the interview and pics later (i had some probs with my cam initially, i hope i have the pics, could be i dont have them :S in that case i will ask a friend who took pics there and will post those pics)

oh, and dfi had a pentium m board running at their booth as well :)

....and a watercooled and overclocked system cooled by asetek waterchill... dont remember the specs now, but i def have pics of that :)

saaya
03-17-2005, 09:58 AM
ah, i forgot to mention, dfi is going to have skype support so you can talk to them or chat with them, sounds very promising! i hope they all speak english as well as danny from dfi though :D

perry_78
03-17-2005, 10:40 AM
Thermal density! You could run a little power generator off the STEAM that watercooling this thing would make :slobber:

No way, ATI is the way to go, good people ;)

kryptobs2000
03-17-2005, 12:14 PM
I personally think this is a bad step. Are we really just going to stop trying to improve what we have and just start adding more? We can't make processors faster so we go dual core, we can't make videocards faster so we go dual core and dual videocard? I like the dual processor idea, and even the dual videocard idea, but sli, imo, isn't a good idea, especially when you then go to quad sli. This just seems like nvidia doesn't feel like innovating and just wants to sell more cards. Crappy idea imo.

EMC2
03-17-2005, 07:15 PM
--- Saya ---

Sorry about your sore feet man, thought you had just forgot :) I understand... try 12 hours on a metal "floor" for weeks on end sometime, lol. Just look at it this way, you'll be ready for the next VolksMarch in the mountains :p:

I asked what DFI said because in post #19 you talked about asking DFI if they were going to support the dual-gpu video cards in their MBs. (the info on skype and future MBs supporting nF4 like options is nice info... don't care about intel, lol)

vielen Dank Saya :toast:


--- Juliette ---

Max power on a PCIe slot is 150W by spec... so you're talking 300W for 2 cards, hardly steam generating heat... regarding ATI, they do make great cards, but they too are going dual card (MVP).


--- kryptobs ---

If the intent was to stop/slow down innovation/improvements I agree it would be a bad thing... but I don't think that is the case and never will be as long as there are at least 2 companies at the top. BUT it is IMO a great way to be able to get more horses quicker... no matter how good a single GPU becomes, you can have more... just like with multiple processors and soon, dual-core CPUs. It also allows the creation of great setups using the lower horse GPUs... present day for example, dual 6600GT cards, giving more bang for the $.

Is it the end all/be all... hardly. But it is a nice option I think :)

saaya
03-18-2005, 11:48 AM
Juliette, ati is going sli as well, and we will see dual ati cards as well, so it will be the same as nvidias 4 vpu sli...

EMC2, the pciE power standard is 65 or 75W, not 150! if it were 150W then no videocard would need an extra molex plug...

and about sli and less innovation... well look at it this way, what new innovation could nvidia or ati integrate into their vpus? wgf 2.0, which is like dx10, is the next thing, and its coming in 1 year and doesnt really bring a lot of innovation from what ive seen... sm3.0 will be all you need for a loooong time, so sli actually makes sense here. i wonder why they didnt go dual core though (i mean pseudo dual core, with two cores on one package), would be much cheaper and even faster than sli...

EMC2
03-18-2005, 05:30 PM
EMC2, the pciE power standard is 65 or 75W, not 150! if it were 150W then no videocard would need an extra molex plug...


Nope, 150W Saya... the 75W is for a PCIe card without the extra power connector on the board. Cards with the extra power connector are allowed 150W.

Specifically: 5.5A @ 12V and 3A @ 3.3V from the PCIe connector (75.9W), plus an additional 6.25A @ 12V from the extra power connector on the card (75W).

(reference 75W PCIe ECN change notice, V0.2, of 3/24/03 to the PCIe spec, and PCIe_x16_150W_ATX_1.0, Rev 1.0, Oct 2004)
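
For anyone who wants to sanity-check those numbers, here's a minimal sketch of the arithmetic (figures straight from the spec revisions cited above; the "75W"/"150W" round numbers are the spec's shorthand):

# Back-of-the-envelope check of the PCIe power limits quoted above.

slot_12v_watts = 5.5 * 12.0    # 5.5 A @ 12 V from the x16 slot
slot_3v3_watts = 3.0 * 3.3     # 3 A @ 3.3 V from the x16 slot
aux_12v_watts = 6.25 * 12.0    # 6.25 A @ 12 V from the extra power connector

slot_budget = slot_12v_watts + slot_3v3_watts   # what a card may pull from the slot
card_budget = slot_budget + aux_12v_watts       # slot plus aux connector

print(f"slot-only budget: {slot_budget:.1f} W")  # 75.9 W -> the "75W" card
print(f"with aux budget:  {card_budget:.1f} W")  # 150.9 W -> the "150W" card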

:toast:

saaya
03-19-2005, 05:56 AM
hmmmmm how is a slot rated for 150W but only 75W are allowed to come through the slot? then its not a slot power rating but more like a thermals in the case rating, no?

never heard about this before, very weird rating... so what if a card sucks more than 150W overall? nothing will happen... so whats this rating supposed to be good for :confused:
if a card sucks more than 75W from the slot the mobo will shut down or the system will die, so that rating makes sense...

EMC2
03-19-2005, 07:50 AM
hmmmmm how is a slot rated for 150W but only 75W are allowed to come through the slot? then its not a slot power rating but more like a thermals in the case rating, no?

No :) It isn't a slot rating, it's a card rating. The slot is only rated at 75W, it is only through the use of the extra power connector that a PCIe card can use 150W. The new spec also specifically states that the additional 12V power rail available via the extra connector "can not be electrically shorted" (connected) to the 12V rail from the PCIe 16x slot connector.



never heard about this before, very weird rating...

Hopefully not so weird to you now :)



so what if a card sucks more than 150W overall?

It would really suck :p:

Don't like that answer? Ok, how about, sirens would start blaring, all the pics and games on your HD would get erased, your girlfriend/wife/significant other would leave you, and Interpol would be knocking on your door within 5 mins. :lol:

Seriously... simple answer is it would not be compliant with the new spec... and as such couldn't be certified. However, being the overclockers we all are, it would come down to whether your PS, and to a lesser extent, your MB could handle it (since most card vendors would likely take OC'ing into account and design their cards so that the extra current draw from OC'ing came from the extra power connector).


nothing will happen... so whats this rating supposed to be good for :confused:

See scenario 2 above :hehe: ...like all specifications, they are guidelines for OEMs to ensure that everyone's toys play together properly without letting the magic smoke out ;)


Oh, and notice I avoided the use of "SLI power connector" as some use... the specification dubs it "the PCI Express x16 Graphics 150W-ATX power connector" (the committee loves to be wordy). Catchy name, no? :rolleyes: In the interest of sanity worldwide I hereby advocate for "PCIe150 cnx" or "Dat 6 pin thingy", poll to be initiated shortly :p: lol

:toast:

saaya
03-19-2005, 09:37 AM
i still dont get what this rating is supposed to be good for :confused: lol :D
do you have a link to it?

how much does a single 6800U use again? ~80W right? so a dual card could break the 150W already.... who made this 150W spec and for what reason :confused: it sure wasnt ati and it sure wasnt nvidia either... its just confusing :D

EMC2
03-19-2005, 06:06 PM
I have a link, but unless you or the company you work for are a member of the PCISig, it won't work, sorry. Not sure how they handle non-member requests for specs (I think they charge $). The exact name of the spec is "PCI Express™ x16 Graphics 150W-ATX Specification Revision 1.0", dated Oct 25, 2004.

Other than the mechanical sizing info and motherboard initialization sequencing, pretty much already gave you all the info in it, LOL ;) (mechanical is 4.376" x 12.283" x 1.37"... they allow the card to protrude into the adjacent slot... mainly for the cooling needs)

Your guesstimate of a dual 6800u is a little off... there is a power savings advantage when putting both on a single card due to shared circuitry ;) (small, but enough to get under the spec). Of course, if you set the default clocks and voltages to the right values, you can always slip under the limits as sold. What someone does when they get it in their hands is completely up to them, LOL. :hehe:

Personally, I see the dual cards as more for lower level GPUs (like the 6600GT) for the most part, but there will be dual 6800u beasts... and other beasties in the future.

Regarding who made it, the PCI-SIG group wrote it, same one that has written every PCI related spec since back when, including all the PCIe ones (website is www.pcisig.com). In this case it was the "subcommittee on PCI-Express", lol. For the most part, sitting committee members are from "the industry" at large. Most of the names you know have a representative on the various committees, including ATI, nVidia, Intel, etc., and a lot you probably haven't heard of (phoooooyeeee... wait a sec, have to wash my mouth out after saying the 'I' word :p)

Why they wrote it... because there were influential committee members that had designs that needed more power, Nvidia being on that list... the 6800u would have had to be "crippled" some without it. The original change to the PCIe spec that bumped the slot itself to 75W was authored by Dell (soap time again, lol).

If there is something specific that is causing the confusion, let me know, maybe I can clear it up if this didn't clear the remaining fog.

:toast:

saaya
03-21-2005, 12:12 PM
thx :toast:

i still dont get why they limited it to 150W ... why 150? why not 160 or 180 or 2000? lol :D

EMC2
03-21-2005, 07:33 PM
No problem Saya. Sorry if I sometimes don't quite follow the Q or do a poor job of answering ;)


Why 150? 'Cause it's twice 75 maybe? :p:

2000!!!! :eek: YIKES!!!!!!! LOL...

Probably a trade-off between the power some wanted and what the thermal analysis said ;) Who knows... it might go up again... give it 6 months :hehe:

:toast:

saaya
03-21-2005, 08:00 PM
well think about it, the thermal density isnt such a big problem actually since we have two vpus here, so it can be spread pretty well... then the only other thing that could be limiting would be the psu, but why 150W? i mean 150W isnt even that much... think of a 140W prescott plus 150W graphics plus lets say 50W system, thats a killer system and we are still only at 340W, which even semi crappy 500W oem psus should be able to handle... maybe its a 12v thing... that would make more sense... hmmmm

or maybe its a molex cable thing? what was the max rating in watts for a single molex cable of a psu again? 75W or 40W maybe? so maybe they set the standard so a card may block one molex cable or two molex cables max... ?

THE JEW (RaVeN)
03-21-2005, 08:29 PM
Should be possible.

Look at Ati's military multi-GPU cards. They've been doing it since the 9700 pro days.

EMC2
03-21-2005, 08:50 PM
FYI... not a connector issue Saya :) The individual pins (smallest for the type connector used) are rated at minimum 6 Amps each in the shell size used, and go up to 8 or 10 Amp rated pins as I remember. The wire specified can do 16 Amps (chassis wiring rating).

The 12V side is different though. Remember this is with dual cards in mind as well. Recommendation is for about 34 Amps on 12V rail, and from my system calcs, I would be more comfortable with 45 (36 and a 25% reserve on a killer system).
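
(a minimal sketch of that headroom math, using the 36A figure and the 25% reserve... the load estimate itself is just my rough guess for a killer system, not a measured number)

# 12V rail headroom: estimated draw plus a safety reserve.

estimated_12v_draw_amps = 36.0   # rough guess for a dual-card "killer system"
reserve_fraction = 0.25          # 25% margin on top of the estimate

recommended_amps = estimated_12v_draw_amps * (1 + reserve_fraction)
print(f"comfortable 12V capacity: {recommended_amps:.0f} A")  # 45 A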

You are right, 150W isn't a ton thermally, but two vid cards sandwiched side by side require extra attention to the cooling solution. Still quite doable :)

Remember too tho, the average Joe/Jill consumer has to be taken into account. While for OC'ers it's no problem, Ma and Pa might not quite handle it right (stuff it all into a mini-case with 2 92mm case fans next to the heater vent :lol: ).

saaya
03-21-2005, 09:02 PM
yeah but the average joe wont buy a 150W card anyways, so i doubt thats what they were thinking about... it has to have something to do with power, not heat, im pretty sure of that... maybe dell cant get good psus that would be needed for over 150W for a card at a good price or something like that.... or the psus in general would cost too much if you take more than 150W for a card into consideration... hmmmm

or maybe its just because its the double of 75W... :D

EMC2
03-22-2005, 10:24 PM
You have a good point Saya, most won't... but as a supplier you still need to take it into account :) (and I was exaggerating a bit for humour about ma/pa)

BTW, Dell didn't do the 150W... just the 75W. But... there are OEMs using some beefy PSUs... PCP&C isn't selling their new 850W SLIs yet to the general public because OEMs are sucking up all they can make right now.

And it just might be because it's double :D

There's a long story I won't tell here, but the bottom line is this... the spacing of the rails for the American railroad system is a result of the distance between the wheels on the old wagon wheels that were used to move out into the wild west waaaaaaaay back when... so the why for a given spec may be stranger than you might think :lol:

matt9669
03-22-2005, 10:36 PM
There's a long story I won't tell here, but the bottom line is this... the spacing of the rails for the American railroad system is a result of the distance between the wheels on the old wagon wheels that were used to move out into the wild west waaaaaaaay back when... so the why for a given spec may be stranger than you might think :lol:

Yes, unfortunately backwards compatibility is the main reason for the oddities we see in many of today's official specifications . . .

saaya
03-23-2005, 02:01 AM
x86... :D

matt9669
03-23-2005, 02:13 AM
x86... :D

Very true, but don't forget what happened to Itanium . . .

Very interesting CPU BTW, do you know it doesn't even do branch prediction? It simply calculates and stores the results for all possible program branches - talk about brute force! :rolleyes:

Anywho, sorry for the :off: - the main problem with four GPU's in one machine is going to be power, plus overall efficiency is reduced when you have to split the load between four processing units - same with 1 -> 2 -> 4+ CPU's . . .

saaya
03-23-2005, 02:53 AM
i dont think the overall efficiency will be affected, you just waste way more frame buffer as each of the vpus has a copy of the same stuff in its frame buffer...

so thats 768MB of frame buffer going down the toilet...
i hope they will go for a shared frame buffer, but it doesnt look like it, at least not on the card... maybe through the pciE bus....

matt9669
03-23-2005, 03:05 AM
i dont think the overall efficiency will be affected, you just waste way more frame buffer as each of the vpus has a copy of the same stuff in its frame buffer...

so thats 768MB of frame buffer going down the toilet...
i hope they will go for a shared frame buffer, but it doesnt look like it, at least not on the card... maybe through the pciE bus....
The more you divide a task, the less efficiently it is handled . . . 4 1GHz CPU's are slower than 1 4GHz CPU, a 200MHz 64bit bus is more efficient than a 100MHz 128bit bus, utilization of resources is more difficult when you have more resources . . .

In the case of split frame rendering, the system has to dynamically load balance four quarters of the screen, and then recombine the final image. Sure, you could switch to static load balancing, but if most of the work ends up in the top/bottom of the scene . . .

For alternate frame rendering you've got an additional 3 frames of delay between input and output, plus the task of pushing 256MB of textures/vector data to each card, twice as often as in a dual GPU system . . .
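
To picture where those 3 frames come from, here's a hypothetical round-robin sketch - not NVIDIA's actual driver logic, just the scheduling idea . . .

# Hypothetical alternate-frame-rendering schedule across 4 GPUs.
# Frame N goes to GPU N mod 4, so while GPU 0 finishes frame N,
# frames N+1..N+3 are already in flight on the other GPUs - up to
# 3 frames of extra input-to-output delay versus a single GPU.

NUM_GPUS = 4

def afr_gpu_for_frame(frame_index: int) -> int:
    """Which GPU renders a given frame under alternate frame rendering."""
    return frame_index % NUM_GPUS

for frame in range(8):
    print(f"frame {frame} -> GPU {afr_gpu_for_frame(frame)}")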

And no, they certainly wouldn't share a common frame buffer, it would be impossible to maintain high enough bandwidth between the GPU's to use the memory effectively, plus then you've got memory coherency issues - similar to when, in a quad Opteron system, one CPU needs data from the memory/cache of another CPU, and it may have to pass through an additional CPU on the way . . .

saaya
03-23-2005, 10:01 AM
For alternate frame rendering you've got an additional 3 frames of delay between input and output, plus the task of pushing 256MB of textures/vector data to each card, twice as often as in a dual GPU system . . .

And no, they certainly wouldn't share a common frame buffer, it would be impossible to maintain high enough bandwidth between the GPU's to use the memory effectively, plus then you've got memory coherency issues - similar to when, in a quad Opteron system, one CPU needs data from the memory/cache of another CPU, and it may have to pass through an additional CPU on the way . . .

no, the chipset in sli mode works like raid1 so you dont get higher latencies
and sharing the frame buffer would need too much bandwidth? how much bandwidth does a 6800U have? and how much bandwidth do 16 pciE lanes provide?
and yeah, accessing memory from the other card would mean a latency hit, but as we all know vpus' super pipelined architectures dont really care a lot about latencies, so the hit wouldnt be that big and you would get almost twice the frame buffer which would probably make up for the bad latency if not even make it faster :)

and about coherency issues, those things can all be worked out, im sure of that... it just takes a lot of time and effort, something nvidia has already invested in sli, and i doubt they will stop working on it now, especially since its so successful.

matt9669
03-23-2005, 03:28 PM
no, the chipset in sli mode works like raid1 so you dont get higher latencies

Who gave you this idea? The same data is not sent to each card, AFR or SFR, SLI mode simply assigns 8 PCIe lanes to each card. If they received identical data they would produce identical output! What would be the point of that?

and sharing the frame buffer would need too much bandwidth? how much bandwidth does a 6800U have? and how much bandwidth do 16 pciE lanes provide?

A 6800U at 550/1100DDR gets roughly 35GB/s, 16 PCIe lanes can provide 4GB/s in either direction (simultaneously, which is where the 8GB/s figure comes from). Don't forget the other card(s) have to receive the data, interrupt their own processing to perform read/write operations, and transmit the data back across the interface before anything happens . . .
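
That gap is easy to work out . . . this sketch assumes the 6800U's 256-bit memory bus and first-generation PCIe at 250MB/s per lane per direction, which is where those figures come from:

# Local frame buffer bandwidth vs. PCIe x16 interconnect bandwidth.

bus_width_bits = 256                   # 6800 Ultra memory bus width
effective_mem_clock_hz = 1100e6        # 550 MHz DDR = 1100 MHz effective
local_bw = (bus_width_bits / 8) * effective_mem_clock_hz   # bytes/s

pcie_lanes = 16
per_lane_bytes = 250e6                 # PCIe 1.x, per lane, each direction
pcie_bw = pcie_lanes * per_lane_bytes  # one direction only

print(f"local frame buffer: {local_bw / 1e9:.1f} GB/s")  # 35.2 GB/s
print(f"PCIe x16, one way:  {pcie_bw / 1e9:.1f} GB/s")   # 4.0 GB/s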

and yeah, accessing memory from the other card would mean a latency hit, but as we all know vpus' super pipelined architectures dont really care a lot about latencies, so the hit wouldnt be that big and you would get almost twice the frame buffer which would probably make up for the bad latency if not even make it faster :)

But you're talking about more than just a few CAS clocks, it wouldn't be any better than using main memory!

and about coherency issues, those things can all be worked out, im sure of that... it just takes a lot of time and effort, something nvidia has already invested in sli, and i doubt they will stop working on it now, especially since its so successful.

Know how much of a b1tch coherency is? :lol: It just doesn't make sense to have GPU's spend valuable clock cycles hunting for data . . .

EMC2
03-23-2005, 07:57 PM
You two have been busy :D

Please don't let me interrupt... but might I suggest you two at least read this (http://www.anandtech.com/video/showdoc.aspx?i=2284), if nothing else ;)

Peace

Nightcover
04-24-2010, 08:47 AM
I just realized it took 5 years to get Quad-SLI support - as in 4 PCBs (not like two GTX295s, but more like 4 GTX480s).

N19h7m4r3
04-24-2010, 09:08 AM
Epic Thread Resurrection FAILS!

Nightcover
04-24-2010, 09:10 AM
true

DarthBeavis
04-24-2010, 09:15 AM
are they going to make a new case with 2 PSUs for all those videocards?

They? Danger Den will make one with just a phone call. Mountain Mods already has dual PSU capability

FischOderAal
04-24-2010, 09:22 AM
Holy necro...

tool_462
04-24-2010, 09:24 AM
They? Danger Den will make one with just a phone call. Mountain Mods already has dual PSU capability

That post is from 5 years ago :p:

generics_user
04-24-2010, 10:09 AM
That post is from 5 years ago :p:

epic win?

lkiller123
04-24-2010, 10:39 AM
Nightcover, you fail :shakes:

SamHughe
04-24-2010, 10:50 AM
I just realized it took 5 years to get Quad-SLI support - as in 4 PCBs (not like two GTX295s, but more like 4 GTX480s).

And you decided to resurrect a 5-year-old thread posted in the NEWS section to make a point?
How is this news?
EPIC :banana::banana::banana::banana:ING FAIL! :down:
Not only should this thread be closed, but you should be banned from the News section for quite a while. :shakes:

stangracin3
04-24-2010, 10:54 AM
and you decided to resurrect a 5-year-old thread posted in the news section to make a point?
How is this news?
Epic :banana::banana::banana::banana:ing fail! :down:
Not only should this thread be closed, but you should be banned from the news section for quite a while. :shakes:

qft

ajaidev
04-24-2010, 10:56 AM
Quad crossfire worked with the 3870/3850 in 2007.

So that means it took 2 years for quad gpu support. Nvidia did the same this year, in 2010.

miahallen
04-24-2010, 11:00 AM
I just realized it took 5 years to get Quad-SLI support - as in 4 PCBs (not like two GTX295s, but more like 4 GTX480s).

Wrong :shakes:

On October 16, 2006, Nvidia released WHQL ForceWare 91.47 drivers for Windows 2000 and XP that support Quad SLI.

This was for the dual PCB 7950GX2 video card from nVidia

http://en.wikipedia.org/wiki/GeForce_7_Series

saaya
04-24-2010, 11:24 AM
Wrong :shakes:

On October 16, 2006, Nvidia released WHQL ForceWare 91.47 drivers for Windows 2000 and XP that support Quad SLI.

This was for the dual PCB 7950GX2 video card from nVidia

http://en.wikipedia.org/wiki/GeForce_7_Series
yeah but the guy was talking about 4x1card operation :)
dont get what the big difference is though...

and shame on you nightcover! you made me think emc was back for a moment there... sigh :(

funny to read my old posts here...
i sound quite rude in some of them... hah :D
im glad emc and matt didnt get me wrong heh... ah those were good times...

tool_462
04-24-2010, 11:28 AM
funny to read my old posts here...
i sound quite rude in some of them... hah :D

Don't worry, 5 years hasn't changed much :p:

Nightcover
04-24-2010, 12:05 PM
lol, sorry guys..

relax a little geez.

I was just reading some old threads cause I like to do that once in a while and then I read this thread and found it kinda funny.

saaya
04-24-2010, 12:20 PM
Don't worry, 5 years hasn't changed much :p:

:P
really? :/
sorry if i come off rude sometimes... :(


lol, sorry guys..

relax a little geez.

I was just reading some old threads cause I like to do that once in a while and then I read this thread and found it kinda funny.
the right thing to do would be to create a new thread in the computer discussion section or wamps and link to this thread :)
next time... :D

Nightcover
04-24-2010, 12:22 PM
:P
really? :/
sorry if i come off rude sometimes... :(


the right thing to do would be to create a new thread in the computer discussion section or wamps and link to this thread :)
next time... :D

I will do that, thanks.

miahallen
04-24-2010, 12:30 PM
yeah but the guy was talking about 4x1card operation :)
Are U sure? :rolleyes:


I just realized it took 5 years to get Quad-SLI support - as in 4 PCBs (not like two GTX295s, but more like 4 GTX480s).

Nightcover
04-24-2010, 12:36 PM
Are U sure? :rolleyes:

yea, not like 2 GTX295, which are two PCB layered cards (at least when they were released) but 4 single cards.

saaya got it right :)

NKrader
04-24-2010, 12:36 PM
i want 16x 480gtx.. crunching on the grid 24/7.. might not get the government's ok to put a nuclear power station behind my house to power that time machine with 1.21 jiggawatts

NKrader
04-24-2010, 12:38 PM
They? Danger Den will make one with just a phone call.

lol need quad psu support? call dangerden.. need anything call debbi.. it is all too easy to get custom jobs done with DD :clap:

miahallen
04-24-2010, 12:41 PM
yea, not like 2 GTX295, which are two PCB layered cards (at least when they were released) but 4 single cards.

saaya got it right :)

Fair enough...but in that case it was working with 285 GTX Classified edition from eVGA last year (beta drivers only IIRC) ;)

saaya
04-24-2010, 12:43 PM
yea, not like 2 GTX295, which are two PCB layered cards (at least when they were released) but 4 single cards.

saaya got it right :)

i know what you mean, but well, hes right actually :D
you said 4 pcbs... and technically the 7950gx2 consisted of 2 pcbs and supported quadsli heheheh

like i said before, i dont see the importance though... 1, 2 or 4 pcbs, 1, 2 or 4 cards... and if im not mistaken, back then we actually talked about a new dual gpu 6800U prototype from asus at cebit that supported sli, so you could in theory run 6800U quadsli back then... i have no idea if that ever worked though...

that was a massive card.. it was taller than any dual gpu card to date, had a triple slot heatsink and it used a single pwm for both gpus iirc, quite an interesting design...

snoro
04-24-2010, 12:55 PM
Bloody hell for a moment i thought msi was doing a dual socket g34 opteron motherboard with overclocking ability.

Epic thread ress is epic

miahallen
04-24-2010, 01:21 PM
....and if im not mistaken, back then we actually talked about a new dual gpu 6800U prototype from asus at cebit that supported sli, so you could in theory run 6800U quadsli back then... i have no idea if that ever worked though...

that was a massive card.. it was higher than any dual gpu card to date, had a tripple slot heatsink and it used a single pwm for both gpus iirc, quite an interesting design...
What, like the one linked in the OP :p:

http://www.xbitlabs.com/images/news/2005-03/asus_01.jpg
http://www.xbitlabs.com/news/video/display/20050309164235.html

:rofl::rofl::rofl:

DarthBeavis
04-24-2010, 03:21 PM
That post is from 5 years ago :p:

at least I try to help people out around here. if that is fail then I am all about fail. :shrug:

DarthBeavis
04-24-2010, 03:21 PM
lol need quad psu support? call dangerden.. need anything call debbi.. it is all too easy to get custom jobs done with DD :clap:

:up:

Aaronage
04-24-2010, 03:59 PM
Huh, I'm not a big forum contributor/reader, but in my brief experience with them I've found people can be very impatient and stressy over silly things like this.

So what if an old thread got resurrected, no harm has come from it. It was quite interesting to read!

saaya
04-24-2010, 07:52 PM
What, like the one linked in the OP :p:

http://www.xbitlabs.com/images/news/2005-03/asus_01.jpg
http://www.xbitlabs.com/news/video/display/20050309164235.html

:rofl::rofl::rofl:

yeah, thats it... its almost frickin double height :D

[XC] gomeler
04-24-2010, 08:07 PM
epic fail