For my part I know nothing with any certainty, but the sight of the stars makes me dream.
Nah, I think it's just a bad review. Someone posted in the 4870 thread that Anandtech was using a 'margin of error'. Whatever they did, it was pretty stupid.
If ATI really believes that the future is in smaller chips working together in X2, X4 and so on, then I guess they take CF damn seriously, since that's what multiple chips working together are all about, instead of one big monster chip like nVidia seems to be going for. So real, working CF is part of their strategy...
---
"Generally speaking, CMOS power consumption is the result of charging and discharging gate capacitors. The charge required to fully charge the gate grows with the voltage; charge times frequency is current. Voltage times current is power. So, as you raise the voltage, the current consumption grows linearly, and the power consumption quadratically, at a fixed frequency. Once you reach the frequency limit of the chip without raising the voltage, further frequency increases are normally proportional to voltage. In other words, once you have to start raising the voltage, power consumption tends to rise with the cube of frequency."
+++
1st
CPU - 2600K (4.4GHz) / Mobo - Asus Evo / RAM - 8GB 1866MHz / Cooler - VX / Gfx - Radeon 6950 / PSU - Enermax Modu87+ 700W
+++
2nd
TR Ultra-120 eXtreme /// Enermax Modu82+ (625W) /// abit IP35 Pro /// Yorkfield Q9650 --> 3906MHz (1.28V) /// 640AAKS & Samsung F1 1TB & Samsung F1 640GB & F1 RAID 1TB /// 4 gigs of RAM --> 520MHz /// Radeon 4850 (700MHz) --> TR HR-03 GT
+++
3rd
Windsor 4200 (11x246 --> 2706MHz, 1.52V) : Zalman 9500 : M2N32-SLI Deluxe : 2GB DDR2 SuperTalent --> 451MHz : Seagate 7200.10 320GB : 7900GT (530/700) : Tagan 530W
Hit it right on the nose, AAbenson; ATI has been saying this for 3 years. There has even been talk of multiple GPUs on the same die.
Particle's First Rule of Online Technical Discussion:
As the length of a thread about any computer-related subject approaches infinity, the likelihood and inevitability of a poorly constructed AMD vs. Intel fight increases exponentially.
Rule 1A:
Likewise, the frequency of car pseudo-analogies used to explain technical concepts increases with thread length. This will make many people chuckle, as computer people are rarely knowledgeable about vehicular mechanics.
Rule 2:
When confronted with a post that is contrary to what a poster likes, believes, or most often wants to be correct, the poster will pick out only minor details that are largely irrelevant in an attempt to shut out the conflicting idea. The core of the post will be left alone since it isn't easy to contradict what the person is actually saying.
Rule 2A:
When a poster cannot properly refute a post they do not like (as described above), the poster will most likely invent fictitious counter-points and/or begin to attack the other's credibility in feeble ways that are dramatic but irrelevant. Do not underestimate this tactic, as in the online world this will sway many observers. Do not forget: Correctness is decided only by what is said last, the most loudly, or with greatest repetition.
Remember: When debating online, everyone else is ALWAYS wrong if they do not agree with you!
http://www.xbitlabs.com/news/video/d...806162759.html
AMD’s Graphics Product Group Claims Multi-GPU is the Future.
ATI CrossFire – Key to AMD’s Future Success, Says Company
Category: Video
by Anton Shilov
[ 08/06/2007 | 04:29 PM ]
Technologies that can utilize the power of numerous graphics processing units (GPUs) for rendering have existed for years, but they were never popular enough for the mass market. ATI, the graphics product group of Advanced Micro Devices, believes that the future of high-performance graphics sub-systems lies in the multi-GPU space, though the company does not explain what kind of multi-GPU that future will be.
“AMD simply aren’t interested in building large monolithic GPUs any more, instead preferring to scale their offerings via multi-GPU,” Richard Huddy, developer relations chief at AMD’s graphics product group, said at the Develop conference, reports the Beyond3D web-site.
The problem, according to Mr. Huddy, lies in the fact that graphics processors tend to get smaller and smaller as manufacturing processes shrink, and going forward it would hardly be possible to equip high-performance, yet small, chips with wide memory buses, which means their performance may be limited by a lack of memory bandwidth. As a result, ATI/AMD will focus on multi-GPU graphics sub-systems going forward, rather than on creating large monolithic graphics chips.
It is not completely clear whether AMD plans to cease developing graphics chips designed for graphics cards that cost $399 and more, and instead simply place the necessary number of mainstream GPUs onto a printed circuit board to get a high-performance graphics card.
Theoretically, a multi-GPU solution may mean either several similar GPUs, as in today’s ATI CrossFire or Nvidia SLI solutions, or several different GPUs, as in the Voodoo Graphics and Voodoo 2 solutions, where different chips performed different tasks.
Nowadays graphics processors contain several key components, e.g. texture units, shader processors, render back-ends, memory controllers and so on. Theoretically, some of these components could function as standalone chips.
http://www.pureoverclock.com/story.php?id=2227
Rick Bergman, leader of the graphics product group at AMD, believes that multiple-GPU graphics cards are the only way forward for both AMD and Nvidia's high-end solutions...
He adds that Nvidia's GT200 will be the company's last "monolithic GPU".
"We didn't want to come out with one monolithic GPU and then disable parts of it for different markets," said an AMD spokesman prior to a full disclosure of the part at a briefing in San Francisco on June 16.
The strategy makes sense for the financially troubled AMD, which has also laid out conservative road maps for its computer processors. The graphics choice reduces costs and risks while maximizing returns for a company that has suffered through multiple loss-making quarters.
The decision to use a two-chip strategy for the high end was made more than two years ago, based on an analysis of yields and scalability. It was not related to AMD's recent financial woes, said Rick Bergman, general manager of AMD's graphics division.
"I predict our competitor will go down the same path for its next GPU once they see this," Bergman said. "They have made their last monolithic GPU."
"On paper the AMD approach looks good," said Jon Peddie, principal of graphics market watcher Jon Peddie Research (Tiburon, Calif.). "If it works, it will be a significant shift in how GPUs are made, but we won't know until later this year" when customers can test the new parts, he said.
This talk has gone back 3 years or so! Now you show me how it's "absolute nonsense".
If you make an RV770X2 as an MCM, you get a big monolithic GPU.
Are we there yet?
Actually, no... you don't. You get two smaller GPU cores that communicate over some sort of bridge. HT? Or the old ring bus, anyone? Smaller cores fit onto a circular wafer better, thus higher yields and less waste. Furthermore, smaller cores can be used across a greater range of products, without having to sell cheaper derivatives with fully functional cores once yields are brought higher.
Therefore, an MCM does not equal a monolithic GPU. By definition!
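To put rough numbers on the wafer-packing point above, here is a minimal sketch using a common dies-per-wafer approximation; the wafer size and die areas are illustrative assumptions, not figures from any vendor.

```python
import math

# Standard dies-per-wafer approximation: whole-die count is the wafer area
# divided by die area, minus a correction for partial dies lost at the rim.
def dies_per_wafer(die_area_mm2: float, wafer_diam_mm: float = 300.0) -> int:
    d = wafer_diam_mm
    gross = math.pi * (d / 2) ** 2 / die_area_mm2          # area-only count
    edge_loss = math.pi * d / math.sqrt(2 * die_area_mm2)  # partial dies at rim
    return int(gross - edge_loss)

for area in (128, 256, 512):
    print(f"{area:4d} mm^2 die -> {dies_per_wafer(area):4d} whole dies per wafer")
```

Note that the 512mm² die gets fewer than a quarter of the candidates the 128mm² die gets, because edge waste hits large dies hardest, and that is before defect yield is even considered.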
Not really. Monolithic means one piece, solid without division, unbroken.
A side-by-side die on a substrate avoids the problem of poor yields vs. a true monolithic core. Just ask AMD about this and their K10. Yes, there are trade-offs to either approach, but for a GPU, because of its highly parallel nature, I suspect the performance trade-off will not be as severe as for a CPU, if done correctly. But that is speculation on my part.
It really comes down to cost: going with a gigantic single die as Nvidia did appears to have necessitated a price point way out of range. How do they take the large die and scale it down for the mainstream markets without making a completely different die?
And actually, on this topic, thinking long-term: if AMD and ATI can get their collective stuff together, we should see some interesting interconnect technology starting on the X2 front and moving to the Swift/Fusion fronts. Integrating multiple discrete GPU cores is akin to doing the same with CPU cores. Since MCM would be nice to see in the future from AMD, it would be interesting to see what they have up their sleeve for MCMing a couple of Phenom dies (think Deneb with 0MB L3) and a couple of RV770s. Is it going to be HT 3.0? Or a type of ATI ring bus? Either would provide sufficient bandwidth for intercommunication. However, I am not sure which would be more pin-count friendly; we are talking about a monster socket for things like this. If rumor is near accurate, Deneb with no L3 is going to be nice and small at 45nm.
Debating picking up an Accelero S1 Rev 2 for my 4850... I don't need it for stability or anything, but overclocking and voltmods are sounding nice once the 700MHz cap is removed.
It's funny that certain review sites could only get 680MHz from the card. I understand testing as released... but if simply changing to a higher-grade thermal paste plus reapplication gets a good temp drop... I was under the impression that 700 wasn't hard to hit. (Please correct me if I am wrong.)
Come on you guys! You know what I mean!
I meant you get all the disadvantages a monolithic GPU brings (heat, excessive die size), and lose the advantages of making one from scratch (native bridges, memory controllers, etc.).
Of course you don't get a monolithic GPU with an MCM, I mean, duh!
But it sure is a lose-lose situation, that's what I meant!
It is so not lose-lose... OMG. Tell Intel their Core 2 Quads are lose-lose; I am sure you will get a resounding laughing binge... for days at the minimum.
MCM gives you a win-win if you design the chip for it. The silliest thing is that AMD didn't go MCM with its quad core. Multiple coherent HT links... HELLO! This screams MCM. I mean, seriously, they could've dropped two Chartered 65nm X2 dies on a substrate and HT'd them together... Worst case, it would've taken 4xxx or 8xxx Opteron cores to do it. How do you think they communicate as-is on a multi-socket platform? HT! So cut the latencies, noise, etc. induced by two socket interfaces and the board traces, and you have a much 'tighter' interface. However, they didn't... and well, we've seen how a large monolithic quad core with varying power planes scales... POORLY.
MCM allows for cheap building blocks and segmentation without losing much, if any, performance. Further, look at some of the more in-depth reviews: it is the IMC and HT interface that allow Phenom to scale so well in multi-socket servers (4P and up), not the monolithic design. In fact, with Nehalem I am sure we will see some more MCM lovin' from Intel, and the cost savings and scaling that go along with it. VIVE le MCM!
I firmly believe it is a much better solution to design a monolithic GPU from scratch than an MCM solution (although I think the best solution of all is a dual-GPU card like the HD3870X2, but much more scalable, of course).
Intel CPUs had the advantage of easier MCM because the memory controller was not integrated, and that's why AMD didn't risk doing so, I think. They preferred a native quad-core design.
And you are contradicting yourself: "It is the IMC and HT interface that allows Phenom to scale so well in multi-socket servers." That's why MCM is a waste of resources with GPUs: GPUs already scale decently without being MCM, if the software is kind enough.
You won't be seeing MCM GPUs in a long time; I would dare to say ever.
Are we there yet?
"When in doubt, C-4!" -- Jamie Hyneman
Silverstone TJ-09 Case | Seasonic X-750 PSU | Intel Core i5 750 CPU | ASUS P7P55D PRO Mobo | OCZ 4GB DDR3 RAM | ATI Radeon 5850 GPU | Intel X-25M 80GB SSD | WD 2TB HDD | Windows 7 x64 | NEC EA23WMi 23" Monitor |Auzentech X-Fi Forte Soundcard | Creative T3 2.1 Speakers | AudioTechnica AD900 Headphone |
Actually, no, I am not contradicting myself. Scaling in terms of operating frequency on a smaller, less complicated die is clearly different from scaling in multi-socket servers.
MCM allows smaller cores, produced more cheaply with a simpler structure, to operate very close to a monolithic core performance-wise (at the same speed). MCM also allows operating at faster frequencies, which in many cases means higher performance. But even if we throw that out the window, we are still left with the advantage of producing cheaper, higher-yield cores and getting the same or nearly the same performance as we would with a more expensive, lower-yield monolithic solution.
FURTHER: in GPUs we have an application where massive parallelism is already the norm. Because of this, MCM is perfect here, as essentially adding more 'processing units' is sinfully easy and is already supported in code (see monolithic chips with many 'cores' doing rather well). MCM removes the bottleneck (if done right) inherent in multi-GPU setups, and as such MCM will be seen; if not from nVidia or AMD, count on Intel to do it with Larrabee. Why?
Because every firm that has ever produced dies in-house knows that you get higher yields the simpler and smaller a core is. Some just choose not to follow it. For instance, nVidia has no massive R&D expenditure for multi-die communication; both AMD and Intel do. So of course nVidia will say that monolithic is the way to go; they don't have the R&D in multi-die comms. Whereas Intel does MCM every day, and AMD can do MCM if they want to; of course they are going to go the opposite route and leverage the dollars spent in that area for the potential (AMD) and actualized (Intel) savings.
Monolithic is easier as an available-today solution for nVidia. If AMD pulls it off and has done something akin to what they are capable of with R700, then multi-GPU will be the best solution for AMD.
The real game is integrating the CPU and GPU, much like the FPU was integrated into much older CPUs. This is where AMD, Intel, and probably nVidia will go. The cheaper and better way is via a type of MCM and inter-die comms. The more costly and brute-force method is a monolithic die.
I agree that it's better to have a card more akin to the 3870X2 from a layout standpoint, as this makes cooling a lot easier and maybe also trace routing, though that depends on the interconnect between the two chips. It is not true, though, that MCM will provide no benefit over a single monolithic die. This is because the price of a die scales much faster than linearly with die size, so one 500 mm² die is four times (might even be more) as expensive as a single 250 mm² die. In other words, that monolithic die will be twice the price to make compared to the MCM version.
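The roughly 4x figure above is plausible under a simple Poisson defect-yield model. A minimal sketch follows; the wafer cost and defect density are made-up values chosen only for illustration, and they happen to land in the same ballpark.

```python
import math

# Cost per *good* die = wafer cost / good dies per wafer. Bigger dies mean
# fewer candidates per wafer AND a lower fraction that are defect-free
# (Poisson model: yield = exp(-D0 * area)), so cost grows super-linearly.
WAFER_COST = 5000.0   # dollars per wafer (hypothetical)
D0 = 0.0025           # defects per mm^2 (hypothetical)

def cost_per_good_die(area_mm2: float, wafer_diam_mm: float = 300.0) -> float:
    candidates = (math.pi * (wafer_diam_mm / 2) ** 2 / area_mm2
                  - math.pi * wafer_diam_mm / math.sqrt(2 * area_mm2))
    good = candidates * math.exp(-D0 * area_mm2)
    return WAFER_COST / good

small, big = cost_per_good_die(250), cost_per_good_die(500)
print(f"250 mm^2: ${small:.0f}  500 mm^2: ${big:.0f}  ratio: {big / small:.1f}x")
```

With these toy numbers the 500 mm² die comes out about 4x the cost per good die, so the MCM of two 250 mm² dies is roughly half the silicon cost for the same total area.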
But in the end it will be better to place those two chips far apart from each other, compared to MCM that is, to facilitate better cooling. So Fusion will be the closest thing there will be to an MCM GPU in the future, and even that is not truly an MCM GPU (because it's an MCM of a CPU + GPU).
I don't think Intel truly had the advantage in making an MCM, though, because AMD has split up their dual-channel memory controller into two separate single-channel memory controllers on Phenom. This way they could use one memory controller from each CPU and link the CPUs together through (several) HT links; you could possibly use multiple links on systems that won't use them for other purposes, like single-socket and dual-socket systems. Each CPU would have its own memory while still getting fast access to the other CPU's memory pool. That access could be (drastically) improved compared to an ordinary two-socket system, because the CPUs are close together, so you can lower latency and increase HT link speed further. But this is all not too related to the topic at hand...
"When in doubt, C-4!" -- Jamie Hyneman
Silverstone TJ-09 Case | Seasonic X-750 PSU | Intel Core i5 750 CPU | ASUS P7P55D PRO Mobo | OCZ 4GB DDR3 RAM | ATI Radeon 5850 GPU | Intel X-25M 80GB SSD | WD 2TB HDD | Windows 7 x64 | NEC EA23WMi 23" Monitor |Auzentech X-Fi Forte Soundcard | Creative T3 2.1 Speakers | AudioTechnica AD900 Headphone |
OK, so we saw what the card (4870) does. I'm really interested in the overclocking ability. Since they give off a lot of heat at idle, I wonder how much it will overclock. I believe I saw only one review that tried to overclock it, and they were not that lucky: like 50MHz on top of stock...
i9 9900K/1080 Ti
@darkskypoet: What is your definition of MCM? Does only the way Intel does it at the moment count as MCM to you, i.e. two chips in one single package? Or would one card with two GPUs (no bridge chip), with some space between the cores to facilitate better cooling but close enough to allow high-speed, high-bandwidth, low-latency communication, count as MCM to you? That latter option would be the way to go for discrete graphics cards; Intel just does MCM to get quad cores in a 'cheap' way. They are both in essence MCM, although we do not usually define that latter option as MCM over here...
EDIT:
@TurboDiv: Their idle power draw is only high at the moment because PowerPlay is not yet functioning properly. These cards are supposed to run at 160MHz on the core and 500MHz on the memory when idling, but at the moment they run at 500 and 750 for core and memory respectively. Those numbers were taken from a newer BIOS for these cards by someone on this forum, I believe, although I'm not too sure about the memory clocks anymore. But once these cards do work properly, idle power consumption will probably be lower than on the 38xx cards.
Last edited by Helmore; 06-25-2008 at 02:58 PM.
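For a rough sense of what fixed PowerPlay clocks could mean, here is a back-of-envelope calculation using the P ∝ f·V² relation quoted earlier in the thread. The clocks are the ones mentioned above, but both idle voltages are pure guesses, not measured values.

```python
# Relative dynamic power under P ~ f * V^2 (see CMOS quote earlier in thread).
# 500 MHz is the currently observed idle core clock, 160 MHz the intended one;
# the voltages below are assumptions for illustration only.
def rel_power(f_mhz: float, volts: float) -> float:
    return f_mhz * volts * volts

broken = rel_power(500, 1.26)   # idle today, assuming full 3D voltage
fixed = rel_power(160, 1.00)    # intended PowerPlay state, assumed lower voltage
print(f"Dynamic core power at idle could drop ~{1 - fixed / broken:.0%}")
```

Under those assumptions the core's dynamic power at idle would fall by roughly 80%, which is consistent with the expectation that a working PowerPlay brings idle draw below the 38xx cards.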
"When in doubt, C-4!" -- Jamie Hyneman
Silverstone TJ-09 Case | Seasonic X-750 PSU | Intel Core i5 750 CPU | ASUS P7P55D PRO Mobo | OCZ 4GB DDR3 RAM | ATI Radeon 5850 GPU | Intel X-25M 80GB SSD | WD 2TB HDD | Windows 7 x64 | NEC EA23WMi 23" Monitor |Auzentech X-Fi Forte Soundcard | Creative T3 2.1 Speakers | AudioTechnica AD900 Headphone |
[Review] Core i7 920 & UD5 » Here!! « .....[Review] XFX GTX260 216SP Black Edition » Here!! «
[Review] ASUS HD4870X2 TOP » Here!! « .....[Review] EVGA 750i SLi FTW » Here!! «
[Review] BFG 9800GTX 512MB » Here!! « .....[Review] Geforce 9800GX2 1GB » Here!! «
[Review] EVGA GTX280 1GB GDDR3 » Here!! « .....[Review] Powercolor HD4870 512MB GDDR5 » Here!! «
Okay, I said this in the other thread also, but I need an answer.
What are the reports of 4870 overclocking indicating? I gather that without a voltmod the card can hold some nice overclocks, up to 800. How many people have achieved this?
And with a nice aftermarket cooler, what about voltmodding?
$120? No thanks, but I have never done any video transcoding, so I'm not sure what it normally costs. BTW, is this cross-platform? I only know it works on ATI cards so far; what about NVIDIA cards?
@annihilat0r: I think we will have to wait about a week to be able to draw a proper conclusion about overclocking abilities.
"When in doubt, C-4!" -- Jamie Hyneman
Silverstone TJ-09 Case | Seasonic X-750 PSU | Intel Core i5 750 CPU | ASUS P7P55D PRO Mobo | OCZ 4GB DDR3 RAM | ATI Radeon 5850 GPU | Intel X-25M 80GB SSD | WD 2TB HDD | Windows 7 x64 | NEC EA23WMi 23" Monitor |Auzentech X-Fi Forte Soundcard | Creative T3 2.1 Speakers | AudioTechnica AD900 Headphone |
@Helmore: MCM is Multi-Chip Module, if I remember correctly, and so technically two dies on the same substrate. Having two separate GPUs on the board and having them linked is a multi-GPU configuration, not MCM by definition, but it could definitely yield adequate results. However, it may require a more expensive board with many more traces than an MCM card would need. Also, consider the increase in latency for communications between the two chips. A bridge chip in this case actually sort of solves the latency issue, as it would take in the data and pass it to the two chips with roughly equal latency... (I think.) However, passing data between the chips has much higher latency than in a true MCM package, especially an MCM package that doesn't require a third chip (Intel's MCMs required both dies to pass data over the FSB); an AMD MCM would not require this, thanks to the IMC and HT links, and neither would Nehalem.
Also, I figured AMD had the advantage in making an MCM, as they wouldn't require outside-module comms, whereas Intel did. However, Intel has manufactured millions of MCM chips, so they kind of have the advantage in experience. The K8/K10 architecture would be better for MCM than Core 2 was, because of HT and the IMC. Intel can now leverage relatively similar tech with CSI and an IMC... (On a side note, Nehalem looks almost identical topology-wise to AMD's slides of Montreal and other multi-core AMD products, except with a Penryn-type core and not a Phenom-type core. lol)