Me thinks folks at VR-Zone really need to learn how to use a camera...
:P
You were not supposed to see this.
I think they did it on purpose to highlight the PCI-E pins
I think it was intentional to show the 6+8 pin requirement too, they have other pics that show the blurred area more clearly.
Someone posted this at B3d... looks like its from PCInlife
http://bbs.chiphell.com/attachments/...HFvBdoLGz9.jpg
Graphs with no photos are fake IMO, but those numbers are real close!
I'm so tired of meaningless 3DMarks...
You were not supposed to see this.
Coding 24/7... Limited forums/PMs time.
-Justice isn't blind, Justice is ashamed.
Many thanks to: Sue Wu, Yiwen Lin, Steven Kuo, Crystal Chen, Vivian Lien, Joe Chan, Sascha Krohn, Joe James, Dan Snyder, Amy Deng, Jack Peterson, Hank Peng, Mafalda Cogliani, Olivia Lee, Marta Piccoli, Mike Clements, Alex Ruedinger, Oliver Baltuch, Korinna Dieck, Steffen Eisentein, Francois Piednoel, Tanja Markovic, Cyril Pelupessy (R.I.P.), Juan J. Guerrero
this explains nothing
It's not redundant. If it weren't mirrored, you'd have far more micro-stuttering than we already do.
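To make the memory point concrete, here's a toy sketch of my own (not anything from ATI's drivers or documentation) showing why mirrored, AFR-style memory means a dual-GPU card with 2x1GB still only gives applications about 1GB to work with. The function name and numbers are illustrative assumptions.

```python
# Toy model: with mirrored (AFR-style) memory, every GPU holds a full
# copy of the textures and render targets, so capacity does not scale
# with GPU count. Only a true shared pool would expose the whole amount.

def effective_vram_mb(per_gpu_mb, num_gpus, shared_pool=False):
    """Return the memory an application can actually use."""
    if shared_pool:
        # Hypothetical shared pool: capacity adds up across GPUs.
        return per_gpu_mb * num_gpus
    # Mirrored: each GPU duplicates the same working set.
    return per_gpu_mb

# A hypothetical 2x1GB board:
print(effective_vram_mb(1024, 2))                   # 1024 -> apps see ~1GB
print(effective_vram_mb(1024, 2, shared_pool=True)) # 2048 -> only with a shared pool
```

This is why "2GB" on a dual-GPU box sticker is misleading under mirroring: the second gigabyte is a copy, not extra capacity.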
why and how is information mirrored in each frame buffer?
i think there are problems combining the output from more than one gpu into a single screen signal, and the execution is still inherently flawed.
and if more people understood the real problems, we'd see fewer crappy outcomes from piss-poor drivers.
mirroring would be good if i was viewing a dynamic inkblot
is the info mirrored for synchronisation? or for other reasons?
more information please.
2 gpus on one pcb - each has an onboard memory controller, i assume? but it would still be possible to have a shared memory pool, as long as the gpus used an external memory controller. the problem with that is you then run into possible latency issues. see why something like this is so difficult to fix?
they'll just bung in more memory and hope that the buffers don't max out, i suppose.
i7 3610QM 1.2-3.2GHz
go to the 4xxx series thread; people have been ranting about the problems of multi-gpu for the past ~15 pages. I'm sure you can find what you need there.
and 2 gpus on 1 pcb means exactly that: 2 gpus on 1 pcb. you'd need a gpu designed to have an external memory controller (or at least a disabled onboard controller plus an external one) for a shared pool to happen. Nothing real special about the 3870x2, and the 4870x2 really only has a superior bridge chip. However, that superior bridge chip may be exactly what ati needed to fix a lot of the 3870x2's problems.
sync problems remain unsolved either way.
call it ranting if you will
but time will tell whether a "superior" bridge chip solves anything.
any 4870X2's with 2 x 1GB memory?
i hope it works
now im thinking 2x 4870 1gb cf vs gtx 280 :/
Not true. If you have a shared memory pool, you don't have to mirror the frame buffer anymore, and if your drivers work right you can align the frames properly - as in even spacing between frames. That's the big issue behind micro-stuttering: for various reasons the frames aren't lined up/rendered at the same rate. Fix the spacing and micro-stuttering is gone. The new bridge chip is supposed to offer 160 GB/s, so that may be enough to keep an MCM happy, or at least eliminate/greatly reduce the latency between the GPUs and memory pools.
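The frame-spacing point is easy to see with made-up numbers. This is a sketch of my own (the timestamps are invented, not measured from any card): two frame streams with the same average frame rate, where one is evenly paced and the other arrives in AFR-style pairs, which is what gets perceived as micro-stutter.

```python
# Illustration of micro-stuttering: identical average FPS, very
# different frame-to-frame spacing. Timestamps (ms) are made up.

def frame_intervals(timestamps_ms):
    """Return the gaps between consecutive frame timestamps."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

# Evenly paced: a new frame every 20 ms (50 FPS).
even = [0, 20, 40, 60, 80]
# Poorly paced AFR: frames arrive in pairs, 5 ms then 35 ms apart.
# The average interval is still 20 ms, so an FPS counter reports 50 FPS,
# but the eye mostly sees the 35 ms gaps (closer to ~29 FPS).
stutter = [0, 5, 40, 45, 80]

print(frame_intervals(even))     # [20, 20, 20, 20]
print(frame_intervals(stutter))  # [5, 35, 5, 35]
```

Both runs cover 80 ms in 4 frames, so average-FPS tools can't tell them apart; only the interval spread exposes the stutter.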
as for a 4870x2 2gb, that's up to the AIBs. The r600 had 1gb versions (even a 2gb workstation edition), but AFAIK the 3870 only had 512mb versions, so once again that's up to the third-party manufacturers. Regardless, if the new bridge chip works as planned, you're better off buying a 4870x2 over 2x 4870 for both price and performance reasons.
-
"Language cuts the grooves in which our thoughts must move" | Frank Herbert, The Santaroga Barrier
2600K | GTX 580 SLI | Asus MIV Gene-Z | 16GB @ 1600 | Silverstone Strider 1200W Gold | Crucial C300 64 | Crucial M4 64 | Intel X25-M 160 G2 | OCZ Vertex 60 | Hitachi 2TB | WD 320
Wow that really sucks if true.
Only 60% faster than 8800 Ultra at Extreme? How is that even possible given the specs?
The 48xx number appears to be true from the rumors we have heard, but I never saw it compared to the Ultra, so I didn't realize how crappy a score it was.
I'm slowly becoming less and less excited about these cards.
2600k @ 5ghz / z68 Pro / 8gb Ripjaws X / GTX 580 SLi
2x Inferno raid 0 / WD 1TB Black / Thermaltake 1200w
Dell 3008 WFP / Dual Loop WC MM UFO
but will the 4870X2 run out of memory?
But regardless if the new bridge chip works as planned, you're better off buying a 4870x2 over 2x 4870 for both price and performance reasons
is it shared memory, or 2x with a bridge?
are we talking nvidia or ati here? c'mon ppl. so GAR, will I be able to play Crysis with 16xQ anti-aliasing and 16x anisotropic filtering at 2560x1600 with a GTX 280? if not, I wouldn't call it a monster :P come to think of it, I have never played a game with those settings :P