Page 154 of 167 FirstFirst ... 54104144151152153154155156157164 ... LastLast
Results 3,826 to 3,850 of 4151

Thread: ATI Radeon HD 4000 Series discussion

  1. #3826
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    City of Lights, The Netherlands
    Posts
    2,381
I'm not sure to what extent the latency will increase with trace distance, but I do know that a bridge chip increases latency quite a bit, so that does not work for core-to-core communication. The biggest problem for MCM on a GPU would be cooling; the GTX 280 is pretty much at the maximum power draw that can be cooled in a single spot. Having 2 GPUs separated by a little distance makes cooling quite a bit easier, as the power output per unit area can be lowered that way.
    The number of traces connecting the 2 chips only plays a role when it is bigger than the number of traces needed for the memory bus. So let's say we have 2 chips, each with a 256-bit memory bus, and this bus requires around 500 pins (not counting video output, PCIe, CFX sideport, 'power' pins and more). Then that MCM design only becomes more efficient when the GPU-to-GPU connection requires more than those 500 pins, which seems unlikely to me. That's because the MCM package would have more than 1000 pins terminating from it, while each of those 2 chips done separately would have 500 pins plus the pins needed for the interconnect. Latency will probably increase when doing things the non-MCM way, but I don't think that will play too big a role, as GPUs tend to be pretty good at latency hiding and memory latency isn't too big a problem either.
    All this of course doesn't apply to the upcoming R700 yet, as R700 probably won't be any miracle, but ATI's future R800 might be more like this.
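The pin-count argument above can be sketched with rough numbers. The ~500-pin figure for a 256-bit memory bus comes from the post; the interconnect pin count is an assumed, illustrative value:

```python
# Rough pin-count comparison: one MCM package vs. two separate packages.
# 500 pins per 256-bit memory bus is the figure used in the post above;
# the 200-pin interconnect is a guess purely for illustration.

def mcm_package_pins(mem_bus_pins):
    """Two dies in one package: die-to-die traces stay inside the
    package, so only the two memory buses terminate as pins."""
    return 2 * mem_bus_pins

def separate_packages_pins(mem_bus_pins, interconnect_pins):
    """Two separate packages: each needs its memory bus plus external
    pins for the GPU-to-GPU link."""
    return 2 * (mem_bus_pins + interconnect_pins)

print(mcm_package_pins(500))             # 1000
print(separate_packages_pins(500, 200))  # 1400
```

By this count the MCM always terminates fewer pins overall, since the die-to-die link never leaves the package; the trade-off is that all those pins (and all that heat) land on one spot on the board.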

EDIT: I can't find anything on Cyberlink's website about GPU-accelerated video transcoding, so I'm not sure whether it will support NVIDIA cards. Has anyone else got any idea? I found some more info over here: http://www.hardware.info/nl-NL/artic...HD_4850_Test/6
    It's Dutch though, so you may have to grab yourself a translator. They say that AMD cards will get GPU-accelerated video transcoding, and they made it sound like NVIDIA will not get it in Cyberlink's application. Adobe Premiere will also get a plug-in to accelerate certain computations, and of course Havok Physics will get a boost in the future with AMD cards.
    So it seems like ATI's cards already have more uses on the GPGPU front than NVIDIA cards; it's just that NVIDIA is making more noise about their CUDA program.
    Last edited by Helmore; 06-25-2008 at 03:44 PM.
    "When in doubt, C-4!" -- Jamie Hyneman

    Silverstone TJ-09 Case | Seasonic X-750 PSU | Intel Core i5 750 CPU | ASUS P7P55D PRO Mobo | OCZ 4GB DDR3 RAM | ATI Radeon 5850 GPU | Intel X-25M 80GB SSD | WD 2TB HDD | Windows 7 x64 | NEC EA23WMi 23" Monitor |Auzentech X-Fi Forte Soundcard | Creative T3 2.1 Speakers | AudioTechnica AD900 Headphone |

  2. #3827
    Registered User
    Join Date
    Jun 2008
    Posts
    46
Well, if you think about it... any multi-GPU tech would require two full sets of memory bus traces as standard, plus a GPU interconnect set. Whereas MCM-designed chips could each utilize half the memory traces and connect to their own banks of VRAM. In this case, the memory 'across' on the other die would be at most 1 extra hop away. Or, if constructed such that the memory controllers and memory are peers on a ring-type bus, while internally on the die they use the newer point-to-point hub interface, then we would still be able to use half the traces and yet have both dies with equal access to all available memory (with a slight latency increase from the physical trace distance to 'far' memory). Notice how in many designs the memory is positioned so that the physical distance is pretty close to equal? I believe this matters for signalling and the like (not so much latency hiding as keeping things in sync).

I understand your cooling point, however look at the emergence of water cooling in the high-performance segment as it is... One thing missed in many discussions of this type is the increased heat density produced by smaller and smaller dies. The downside of, say, 45nm vs 65nm is the smaller die area for heat dissipation. As we shrink cores and dump more transistors into them, I don't think we'll be able to avoid the worsening ratio of heat produced to die size. Thus more exotic cooling solutions may be required for 32nm individual cores. Also consider that in an MCM package, we are still dealing with a doubling of surface area for a doubling of heat produced. That this heat is localized to a smaller area may not matter all that much in the grand scheme of things. It may call for more efficient heat spreaders, or simply a larger substrate to mount the dies to. It may not require two full separate packages at all.
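The heat-density point can be illustrated with assumed numbers (a hypothetical 150 W, 300 mm² core and an ideal linear shrink; none of these values are from a real chip):

```python
# Heat density when the same power lands on a smaller die after a shrink.
# The 150 W / 300 mm^2 starting point is assumed for illustration only.

def heat_density(power_w, die_mm2):
    """Watts per square millimetre of die area."""
    return power_w / die_mm2

die_65nm = 300.0                       # mm^2, assumed
die_45nm = die_65nm * (45 / 65) ** 2   # ideal linear shrink, ~143.8 mm^2

print(round(heat_density(150, die_65nm), 2))  # 0.5  W/mm^2
print(round(heat_density(150, die_45nm), 2))  # 1.04 W/mm^2
```

So even at constant power, a full-node shrink roughly doubles the W/mm² the cooler has to pull out of one spot, which is the pressure toward exotic cooling the post describes.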

  3. #3828
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    City of Lights, The Netherlands
    Posts
    2,381
With GDDR5 the distance to memory no longer has to be equal for all chips; the chips do a little training on start-up to see how best to operate, and this way they can adapt to different circumstances. It's also useful for overclocking, by the way: the chips set latencies and other values at boot, which increases the overclocking headroom these chips have by design. One more thing about GDDR5: the lead designer of GDDR5 is on AMD's employee list.
    As for utilizing half the memory bus for communication, I don't think that's a very elegant approach, as you are trying to make one chip for both the midrange and the enthusiast market, and you will need the bandwidth in the enthusiast market. One more thing: I think you can get a higher data rate per pin if you make a more 'proprietary' connection, and besides, I don't think you need a connection at the full speed of each chip's memory system. You won't be using that connection for AA resolve, AF and the like; it's only meant to share data that needs to be shared, like state changes and textures, and you probably only need 1/3 to 1/2 of the full (per chip) memory bandwidth to do that.
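To put rough numbers on that 1/3-to-1/2 estimate, take the HD 4870's theoretical memory bandwidth of about 115.2 GB/s (256-bit bus, 3.6 Gbps effective GDDR5):

```python
# Required GPU-to-GPU link bandwidth if the link only needs a fraction
# of each chip's memory bandwidth, as argued above. 115.2 GB/s is the
# HD 4870's theoretical figure (256-bit bus at 3.6 Gbps effective).

def link_bw(mem_bw_gb_s, fraction):
    return mem_bw_gb_s * fraction

mem_bw = 115.2
print(link_bw(mem_bw, 1 / 3))  # ~38.4 GB/s
print(link_bw(mem_bw, 1 / 2))  # ~57.6 GB/s
```

So the interconnect would need on the order of 40-60 GB/s, well below a second full memory bus, which is the core of the argument for a narrower 'proprietary' link.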
    Last edited by Helmore; 06-25-2008 at 04:03 PM.

  4. #3829
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,955
Thing is, considering the bridge chip provides 160 GB/s of bandwidth, I can't see any other explanation than that R700 is an MCM; such a high-bandwidth interconnect would all go to waste otherwise (like putting GDDR5 on a 512-bit memory controller: there's just no need/use for it yet).
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  5. #3830
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    City of Lights, The Netherlands
    Posts
    2,381
"considering the bridge chip provides 160 GB/s of bandwidth" — where did you get that from? Because then you have more information on R700 than I do (not entirely impossible; I'm not an AMD employee or anything). That information certainly isn't publicly available on the internet. Oh, and R700 probably is not an MCM design; that's pretty clear already from the leaked pics. We don't even know whether there will be a bridge chip, although there probably will be one.
    Last edited by Helmore; 06-25-2008 at 04:09 PM.

  6. #3831
    Registered User
    Join Date
    Jun 2008
    Posts
    46
    Quote Originally Posted by Helmore View Post
With GDDR5 the distance to memory no longer has to be equal for all chips; the chips do a little training on start-up to see how best to operate, and this way they can adapt to different circumstances. It's also useful for overclocking, by the way: the chips set latencies and other values at boot, which increases the overclocking headroom these chips have by design. One more thing about GDDR5: the lead designer of GDDR5 is on AMD's employee list.
    As for utilizing half the memory bus for communication, I don't think that's a very elegant approach, as you are trying to make one chip for both the midrange and the enthusiast market, and you will need the bandwidth in the enthusiast market. One more thing: I think you can get a higher data rate per pin if you make a more 'proprietary' connection, and besides, I don't think you need a connection at the full speed of each chip's memory system. You won't be using that connection for AA resolve, AF and the like; it's only meant to share data that needs to be shared, like state changes and textures, and you probably only need 1/3 to 1/2 of the full (per chip) memory bandwidth to do that.

Ahh, but you wouldn't be... it's not set in stone how much is used for what. The communication happens on the internal bus, which is shared between inter-die and memory traffic. So as inter-core comms vary, I guess one downside is that it would slightly reduce the available memory bandwidth. However, the external bus to the memory modules doesn't have to be this full size. I am simply envisioning a way to build dies from the ground up for MCM. The low end would have one die, thus half the memory bandwidth; the higher end has 2 or 4, thus double or quadruple, depending on what is required. The idea is to have the memory controllers for each batch of chips (x per die) as peers on a larger bus along with the compute/setup parts of the die. In this fashion, memory bandwidth and inter-die comms share one standard bus which, if designed properly, could be expanded by powers of two (up to some finite limit, of course, but let's think in terms of 1 to 4 dies). Or it could be done with something more akin to HT links, say 2 or 3 per die. Then, in reality, adjusting for latency, you'd have the ability to do an MCM for the cheap midrange (4850 - 4870), a dual MCM (4870 X2), and a single die for the low end (46xx). Additionally, think of dropping one of these onto a chipset package... gotta think platform synergies too (if at all possible).

  7. #3832
    Xtreme Addict
    Join Date
    Apr 2006
    Location
    City of Lights, The Netherlands
    Posts
    2,381
Ah OK, then I didn't really get what you meant in your previous post.
    I have also been thinking about a reasonably similar set-up for some time now, but that apparently isn't the case for R700 and, as things stand, R800. The approach would be that each chip has a "bus stop", so to speak; these chips would then communicate with each other through a high-speed interconnect, so 2 chips would have 2 of these "stops" connected and 4 chips would make for 4 connected "stops". The 4-chip approach would be like opening up the ring bus memory controller that was used on R600 (although the interconnect would have to be more efficient trace-wise), with each chip being one of the 4 stops on the ring bus that used to be the memory controllers on R600. It would not really be a ring bus anymore, as that would not work for the 2-chip part, but it would make for easier scaling. Each of these chips would have, or rather "be", one 64-bit or 128-bit memory controller and still be a fully functional chip when used on its own. One chip of about 100 mm^2 seems ideal to me....
    That would be ideal, or close enough, for multi-die scaling, but it won't show up for quite a while; we won't see something akin to this until R900 or so.
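The "bus stop" scaling described above can be sketched as a toy model (the 64-bit controller per die is from the post; the link-count rule is one reading of how a ring degenerates for fewer stops):

```python
# Sketch of the "bus stop" idea: each small die carries one 64-bit
# memory controller and is one stop on a shared interconnect, so total
# memory bus width scales with the number of dies.

def total_bus_width_bits(dies, mc_bits_per_die=64):
    return dies * mc_bits_per_die

def interconnect_links(dies):
    """Point-to-point links between stops: none for a single die, one
    direct link for two dies, a closed ring of N links for N >= 3."""
    if dies < 2:
        return 0
    if dies == 2:
        return 1
    return dies

for n in (1, 2, 4):
    print(n, "dies ->", total_bus_width_bits(n), "bit bus,",
          interconnect_links(n), "links")
```

This matches the post's point that the 2-chip part isn't really a ring anymore (a ring of two stops collapses into a single link), while a 4-chip part recovers something like R600's ring-bus topology.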

  8. #3833
    Xtreme Addict
    Join Date
    Apr 2008
    Location
    Birmingham AL.
    Posts
    1,079
I have been told by a reliable source just an hour ago that R700 has a PCIe 2.0 PLX chip. He has been reliable in the past, and I have begged him for this info the second he had it. Apparently the assembly lines have fired up...

Unfortunately he has no tech info other than what can be visibly seen.
    Particle's First Rule of Online Technical Discussion:
    As a thread about any computer related subject has its length approach infinity, the likelihood and inevitability of a poorly constructed AMD vs. Intel fight also exponentially increases.

    Rule 1A:
    Likewise, the frequency of a car pseudoanalogy to explain a technical concept increases with thread length. This will make many people chuckle, as computer people are rarely knowledgeable about vehicular mechanics.

    Rule 2:
    When confronted with a post that is contrary to what a poster likes, believes, or most often wants to be correct, the poster will pick out only minor details that are largely irrelevant in an attempt to shut out the conflicting idea. The core of the post will be left alone since it isn't easy to contradict what the person is actually saying.

    Rule 2A:
    When a poster cannot properly refute a post they do not like (as described above), the poster will most likely invent fictitious counter-points and/or begin to attack the other's credibility in feeble ways that are dramatic but irrelevant. Do not underestimate this tactic, as in the online world this will sway many observers. Do not forget: Correctness is decided only by what is said last, the most loudly, or with greatest repetition.

    Remember: When debating online, everyone else is ALWAYS wrong if they do not agree with you!

  9. #3834
    Registered User
    Join Date
    Jun 2008
    Posts
    46
    @Helmore:

Actually, if you think about it, with both Intel and AMD having IMCs, the next move would be onto the CPU module. Never mind the marchitecture speak; right now on an AMD platform you have Memory -> CPU -> Chipset (PCI Express arbiter) -> video card, and with CSI, the same thing. The move onto the CPU will be made to drop the extra steps between system RAM and video RAM. Now, honestly, I couldn't quantify the additional penalties of routing memory requests to the CPU's IMC and then passing data back through the CPU, and in reality it is probably less than using, say, PCI Express slots off the SB (think NB <-> SB config), but I think we'll be seeing these GPGPU cores on the CPU sooner rather than later. I mean, think of the benefits in terms of CPU-to-GPGPU communication, and memory access for both. Now don't get me wrong, it won't happen overnight. But it will happen, and it will cause a massive shift in the industry. And then what are we looking at? An SGI shared-memory infrastructure? Tiered system memory? Embedded DRAM a la BitBoys (shared CPU/GPU L3 cache) plus very fast DDR3 or GDDR5 system memory modules for access by both? In a nutshell, this is Fusion. Think of the tremendous redesign in such a chip: you could probably shave the CPU component down some and leverage the strengths of the GPGPU transistors to take on jobs that the CPU's FPU currently handles. For me this is the real game at hand, and a lot of the tech being developed now is a stepping stone to something more like a CPU/GPGPU design. Once you start to play with that sort of idea, the magnitude of the design shift and paradigm rethink is tremendous, yet the benefits could be equally great. Chips like Cell start to become rather more interesting if you consider that in 5 years' time the same sorts of ideas will be played out in PC land.

It also puts AMD/ATI's vector SP design in a different light: AMD is banking on merging the two in the near future. So does it make more sense for them to have gone with NVIDIA's very scalar method of processing, or AMD's more vector-oriented SPs? I honestly don't know, but I do wonder if they are thinking in terms of what will make a faster unified CPU/GPU. To be able to harness half the power of RV770 as an extended FPU would be incredible. I think part of AMD's teething problems is that they can't afford to R&D for its own sake; they can't afford not to make a product out of their stepping stones. So they learn and drop a product on us, moving towards a goal on a 5-year time frame (maybe longer, who knows). Perhaps their design decisions on discrete cards 'now' are being informed by what they see Fusion being later. Again, multi-GPU and multi-die communication then becomes more important because it may be a requirement for their Fusion plans. The same might be said for the vector SIMD SP approach vs. the scalar SP approach. Hard to say, but exciting nonetheless.

Intel's Nehalem has 3 DDR3 channels? Interesting, as that is surely laying the groundwork for adding Larrabee later on. Consider the die size comparisons and compare the GTX 280 to Nehalem: lots of space left to add video, non? Further, Intel's 45nm node is far denser than that, and far better 'tweaked'. If NVIDIA, using third-party libraries, can lay down 1.4 billion transistors at 65nm, Intel can do far more on its in-house 45nm. However, I bet they would do it as an MCM, so that Nehalem's frequency is not held back by the transistor-rich graphics die. Also, when you have 14+ fabs, why not break production down into multiple dies? That allows for both discrete and embedded video. (Side note: MCM allows for completely separate power planes without trying to do it on one die; I believe this hurt Phenom quite a bit.)

  10. #3835
    Xtreme Member
    Join Date
    Dec 2005
    Location
    CA
    Posts
    172
    Sapphire 4870 in stock at newegg

    http://www.newegg.com/Product/Produc...82E16814102748

    I just ordered mine
    Intel D975XBX2
    X6800 ES @3.6 Ghz
    4X1024MB OCZ DDR2 PC2-6400 Gold Dual Channel 4-4-4-12
    2x500GB Western Digial SATA II HDD-Raid 0
    Sapphire Radeon HD 4870 512MB
    Corsair HX620W
    Water loop: Storm,D5,Silverprop Cyclone FusionHL, Coolingworks 32T
    My Stacker

  11. #3836
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    407
Well, so much for the rumor that the 4870 X2 isn't using CF. Now we have a rumor to the contrary. I will be very disappointed if AMD releases R700 with basically the same driver and microstuttering issues as current multi-GPU solutions. I may hold onto my GTX 280 if that's the case. Microstuttering reduces the value of multi-GPU even when a game has proper driver support. At the very least I would like to see SFR and tiling support in future drivers, so that when microstuttering becomes noticeable in certain games we could just switch to SFR or "super-tiling".
    Last edited by gojirasan; 06-25-2008 at 06:12 PM.

  12. #3837
    Registered User
    Join Date
    Jun 2008
    Posts
    46
Tiling would rock... The performance-to-transistor ratio that the Kyro II got was simply incredible, for those games that supported it at all. LoL. But seriously, tiling would rock for CrossFire, and quad CrossFire to boot. However, with AMD's superior geometry setup, wouldn't even a single card be able to do the geometry setup and then pass it around to the cards for tiling? (Yeah, I'm completely uninformed about what holds tiling back, so this may be a stupid question.)


Not to delve into nastiness, but has anyone reminded an NVIDIA fanboy lately that they should be thanking 3dfx for SLI? Or... oh... roughly half the IP currently in use on NVIDIA cards? :p... sorry, I guess I shouldn't stir the pot.


    3DFX I <3 you! Miss you long time. (Ah my 2 Voodoo 2s in SLI)...

  13. #3838
    Registered User
    Join Date
    Jun 2008
    Posts
    46
    Quote Originally Posted by gojirasan View Post
Well, so much for the rumor that the 4870 X2 isn't using CF. Now we have a rumor to the contrary. I will be very disappointed if AMD releases R700 with basically the same driver and microstuttering issues as current multi-GPU solutions. I may hold onto my GTX 280 if that's the case. Microstuttering reduces the value of multi-GPU even when a game has proper driver support. At the very least I would like to see SFR and tiling support in future drivers, so that when microstuttering becomes noticeable in certain games we could just switch to SFR or "super-tiling".
Actually... it was never a question of whether or not it was going to use CrossFire. Of course it would. The question was whether they had improved it sufficiently to do away with all the issues the 3870 X2 had, namely passing all data through the bridge chip. If the CrossFire sideport gives extra communication bandwidth between the chips and the bridge chip is more for incoming data, then we could still be looking at something completely different.

I mean, really, how else did you expect the data from the system to get through the PCI Express bus to the two chips? Through one of the GPUs? Magic? A bridge chip makes sense for some comms, as it can essentially burst the data to both GPUs with no extra overhead imposed by a transfer to the second GPU.

Once the chips have the data, the processing and intercommunication via the CF sideport, the bridge, or both can begin... I assure you, the transistors for the CF sideport were not included simply for s*its and giggles...

I honestly think that the level of improvement we've seen so far bodes well for what the newer X2 layout will bring to the table. So please, wait and see; the dire CF predictions have no merit whether it has a bridge chip or not. Really. It's OK to have a bridge chip!
    Last edited by darkskypoet; 06-25-2008 at 06:34 PM.

  14. #3839
    Xtreme Addict
    Join Date
    Apr 2008
    Location
    Birmingham AL.
    Posts
    1,079
The PEX chip on R700 is a lot smaller than on R680. Also the entire board is black, but that may just be that manufacturer.

  15. #3840
    Xtreme Member
    Join Date
    Oct 2007
    Posts
    407
    Quote Originally Posted by darkskypoet
    I mean really? How else did you expect the data from system through PCI Express bus to get to the two chips?
    I was hoping for something along the lines of how quad core CPUs do inter-core communication. Although in that case all the cores are on the same die. But I was still hoping for something along those lines. I don't know much about CPU/GPU architecture so it is difficult for me to speculate realistically. Is it really theoretically impossible to have seamless multi-GPU communication? Also why would the GPUs on the 4870x2 need to use the motherboard-based PCIe bus? The two chips are right next to each other on the same PCB.

  16. #3841
    Registered User
    Join Date
    Jun 2008
    Posts
    46
    Quote Originally Posted by gojirasan View Post
    I was hoping for something along the lines of how quad core CPUs do inter-core communication. Although in that case all the cores are on the same die. But I was still hoping for something along those lines. I don't know much about CPU/GPU architecture so it is difficult for me to speculate realistically. Is it really theoretically impossible to have seamless multi-GPU communication? Also why would the GPUs on the 4870x2 need to use the motherboard-based PCIe bus? The two chips are right next to each other on the same PCB.
Mainly because they get every single last bit of information they will ever process through the PCI Express bus. Inter-die communication aside, they need a way to efficiently talk to the rest of the system; that's why a bridge chip is necessary. The only other option is to daisy-chain the chips so that one acts as the arbiter on the PCIe bus and then forwards info, but that would be a bottleneck, and it would waste precious inter-die bandwidth when there are already transistors on the die to handle incoming and outgoing non-local (to the card) transfers: PCI Express.

Thus, regardless of inter-die comms: two chips equals a bridge chip for system requests to/from the CPU, system memory, and so on.
    Last edited by darkskypoet; 06-25-2008 at 06:58 PM.

  17. #3842
    Xtreme Addict
    Join Date
    May 2007
    Location
    'Zona
    Posts
    2,346
    Quote Originally Posted by G0ldBr1ck View Post
The PEX chip on R700 is a lot smaller than on R680.
    Quote Originally Posted by G0ldBr1ck View Post
    I have been told by a reliable source just an hour ago that R700 has a PCIe 2.0 PLX chip.
    This PLX chip.

  18. #3843
    Xtreme Addict
    Join Date
    Apr 2008
    Location
    Birmingham AL.
    Posts
    1,079
Not sure, trying to get more info. I just gotta wait for another email.

  19. #3844
    I am Xtreme
    Join Date
    Jul 2007
    Location
    The Sacred birth place of Watercooling
    Posts
    4,689
    Is there a release date on the 4870x2 yet?
    Quote Originally Posted by skinnee View Post
    No, I think he had a date tonight...

    He and his EK Supreme are out for a night on the town!

  20. #3845
    Xtreme Enthusiast
    Join Date
    Sep 2006
    Posts
    881
If one 4870 costs $300, the 4870 X2 will probably be in the $500-600 range; not worth it over the GTX 280 IMO (if it's just internal CrossFire like the 3870 X2).

  21. #3846
    Xtreme Addict
    Join Date
    Apr 2008
    Location
    Birmingham AL.
    Posts
    1,079
Even though R700 uses a PLX switch, there are still a lot of improvements made to the design. R700 will function much better than R680's design did. As far as price goes, I would speculate that after R700 is out, prices will fall in line with where R670/R680 were, which would be great.
    Last edited by G0ldBr1ck; 06-25-2008 at 08:41 PM.

  22. #3847
    Registered User
    Join Date
    Jun 2008
    Posts
    46
    Quote Originally Posted by G0ldBr1ck View Post
Even though R700 uses a PLX switch, there are still a lot of improvements made to the design. R700 will function much better than R680's design did. As far as price goes, I would speculate that after R700 is out, prices will fall in line with where R670/R680 were, which would be great.
Seriously though, I really think that unless they were going MCM, they had no choice but to go with a PLX switch to handle the incoming and outgoing data from/to the system. I think they'll have made strides in opening another pathway for quick inter-GPU comms. Unless anyone else has an idea of what would replace the bridge chip for this task?

  23. #3848
    Xtreme Addict
    Join Date
    May 2007
    Posts
    2,125
Whether or not it is CF on the card, there will still need to be a chip for chip-to-chip communication, because how else would you get two chips to communicate with one another unless you go MCM?

The question remains, though, whether it is straight CF on the card (with the bridge chip simulating the PCI-E connection + bridge) or whether more than just CF is being communicated. A B3D post by one of the ATI team members suggested that the two GPUs' caches allow data to be stored such that > 2x scaling with dual-GPU CF is possible, so we might see that CrossFire sideport come in handy, since it wasn't explained during the RV770 briefing.

  24. #3849
    Registered User
    Join Date
    Jun 2008
    Posts
    46
Theoretically, you could use the CF sideport to communicate chip to chip, although I am not privy to bandwidth and latency figures for such a setup. If we used HT3 as a reference point, though, I am pretty damn sure you could facilitate GPU-to-GPU comms through a link like that. However, if you have two ports to get data back and forth, it makes sense to use both (16-lane PCIe 2.0 and the CF sideport). I think with the cache snooping you allude to, and a well-implemented algorithm, one could use the memory of both cards as more of a unified memory system, as the CF sideport may make both sets of memory 'more local' than system memory. If it guesses wrong often enough and passes data to the wrong GPU's memory, though, this could result in a slowdown, as the CF sideport can't be as fast as direct VRAM access... or can it? Does anyone have specs or stats on the CF sideport in terms of bandwidth / latency / etc.? And actually, thinking of it, what is the max theoretical bandwidth of a 16-lane PCIe 2.0 connection? I mean, if we are talking about abusing a bus: sure, a video card in normal usage doesn't use the full PCIe 2.0 x16 bandwidth, but two talkative GPUs could.
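To answer the bandwidth question at the end: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so only 8 of every 10 raw bits carry data. A quick check:

```python
# Theoretical bandwidth of a PCIe 2.0 link: 5 GT/s per lane with
# 8b/10b encoding (8 data bits per 10 line bits), per direction.

def pcie2_bw_gb_s(lanes):
    raw_gt_s = 5.0      # giga-transfers/s per lane
    encoding = 8 / 10   # 8b/10b line-code efficiency
    return lanes * raw_gt_s * encoding / 8  # bits -> bytes

print(pcie2_bw_gb_s(1))   # 0.5 GB/s per lane, each direction
print(pcie2_bw_gb_s(16))  # 8.0 GB/s for x16, each direction
```

So a x16 slot gives 8 GB/s each way (16 GB/s aggregate), an order of magnitude below the per-chip memory bandwidth, which is why a dedicated sideport for GPU-to-GPU traffic is attractive.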

  25. #3850
    Xtreme Addict
    Join Date
    Jul 2007
    Location
    Alberta, Canada
    Posts
    1,264
If the 4870 X2 is on a black PCB, I'll be so stoked. I'd much sooner have 1 full-coverage block for a dual-GPU card than 2 separate blocks, so I doubt I'll touch 2 4870s now. I might just order a 4850 to play around with until the 4870 X2 is out though. Wouldn't mind trying to get one past 750 MHz somehow, hehe.
    Feedanator 7.0
    CASE:R5|PSU:850G2|CPU:i7 6850K|MB:x99 Ultra|RAM:8x4 2666|GPU:980TI|SSD:BPX256/Evo500|SOUND:2i4/HS8
    LCD:XB271HU|OS:Win10|INPUT:G900/K70 |HS/F:H115i

