not surprised;)
I'm not sure whether this has already been posted, but it seems that NordicHardware is now doubting its own earlier information (in a positive way, though).
They are also pointing to May as the time for the first cards to surface, though probably not retail.
Damn. AMD got it right with their video (thanks to ATI that is). They gotta get their mind straight in the CPU war. I wanna be their consumer!
Looks like ATI will finally be able to get back up on its feet from the AMD takeover and the R500 fiasco (where it lost both the midrange and highend to Nvidia all in one swoop).
Two years ago, if you told someone that ATI or Nvidia would be coming out with a 480-shader card with 1GB of GDDR5/4 for $299, they'd laugh their heads off, give you the $300, and recommend you seek therapy.
Gone are the days of $1000 super rare uber-vid cards :)
sssshtt...... not so loud, they'll hear you.
but indeed, the last $1000+ card was the 7800GTX 512 if I remember correctly.
the 8800 Ultra was like 300 bucks cheaper, and now the 9800GX2 is 200 cheaper than the 8800 Ultra.
so if we wait we can have a 10800GX2 for like 300 :up:
Dropping prices can do good to help out PC gaming (of course) :yepp: :up:
looking good ;)
http://www.xbitlabs.com/images/news/...4_2007_jpr.png
Prices are falling again. But the discrete card market share is too low (30%). The other 70% are IGPs.
Of that 30%, many are low-end and lower-midrange cards, so clearly most graphics cards sold are at that level.
Anyway, the AMD 780G is a revolution in the IGP market (70% of the market).
it's interesting to see the discrete market share climbed 10 points in one year (from 21% to 31%).
It seems that people are finally realizing that IGPs suck.
Only 16 ROPS just like my crappy 2900XT? Performance in today's (non-UT3 engine) games will probably still be mediocre. The R700 will probably PWN in all UT3 based games and newer game engines, though. Nvidia won't be able to keep up in shader intensive game engines this time around I bet. Unfortunately for ATI, there aren't too many of those game engines currently. I think the R700 needs to drastically increase the texture fillrate, compared to the R600, to be competitive.
It depends on what a certain game needs to run well. If it needs a high pixel fill rate, then only having 16 raster operation pipelines (ROPs) will probably be a limitation. If the game requires a high texture fetch rate, then the new RV770 core with 32 TMUs (double that of R600) should provide excellent performance. The R600 and RV670 were already good cards at pure shader limited apps like 3dmark06 and Oblivion. Adding 50% more shaders and increasing the clock speed will only make things better.
The 8800GTX was such a good card because it was so well balanced, equally good at many things. I am hopeful for the RV770, but having the same number of ROPs as the X1800XT does not inspire confidence for applications that require a high pixel fill rate.
Another downside of keeping the same number of ROPs is AA/AF performance, as well as post-processing capabilities. This was identified as a weakness in the R600 architecture. Unless ATI has figured out a way to do more with 16 ROPs than it did before, this will probably be a weakness again.
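For what it's worth, the fill-rate side of this argument is just units times clock. A quick back-of-the-envelope sketch; the RV770 figures are the thread's rumored specs, and the 750 MHz clock is purely my assumption:

```python
def pixel_fillrate_gpix(rops: int, clock_mhz: float) -> float:
    """Peak pixels written per second, in Gpixels/s."""
    return rops * clock_mhz / 1000.0

def texture_fillrate_gtex(tmus: int, clock_mhz: float) -> float:
    """Peak bilinear texels fetched per second, in Gtexels/s."""
    return tmus * clock_mhz / 1000.0

# R600 (HD 2900 XT): 16 ROPs, 16 TMUs at 742 MHz
print(pixel_fillrate_gpix(16, 742))    # ~11.9 Gpix/s
print(texture_fillrate_gtex(16, 742))  # ~11.9 Gtex/s

# Rumored RV770: 16 ROPs, 32 TMUs; 750 MHz is an assumed clock
print(pixel_fillrate_gpix(16, 750))    # ~12.0 Gpix/s, barely moves
print(texture_fillrate_gtex(32, 750))  # ~24.0 Gtex/s, doubles
```

So with the same 16 ROPs, pixel fill stays basically flat while texture fill doubles, which is exactly the trade-off being debated here.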
Wow...
First, games don't "need" a higher pixel fillrate; that depends on the resolution, obviously.
I also don't know if I would call G80 balanced...
Also this whole, R600's downfall is the ROPs and broken AA is BS. There seems to be a larger hit from AF, which is why AMD/ATi is doubling the texture units, than there is from AA.
16 ROPs will not hurt the RV770 at all, they simply need to do a little tweaking, like doubling the Z per clock from 2x to 4x, and things should be fine.
ROFL. Yes, they do. At a given resolution and settings, a card with better fill rate will perform better when the API demands that.
It certainly is. Name an area in which it is unbalanced. There are none. I challenge you to explain to everyone how the G80 architecture is not balanced. Compare this to R600, which is good at some things and poor at others.
ROPS do a lot more than just AA. Read what I wrote.
Right... maybe up to a certain point but after that there are much more noticeable bottlenecks than pixel pushing...
Continuous floating point performance?
Non-hardware reliant AA?
There are a few more but that would just lead this topic even more astray.
I understand that, where did I say anything different?
The point I was trying to make is that the changes the RV770 is supposedly receiving will make it a formidable opponent.
One thing I have learned about AMD / ATI GPU's, never believe the hype.
Oh-noes...
Theoretical throughput is lower than peak.
You don't really want to compare G80 to R600 and say SP efficiency sucks on G80... :)
Quote:
Non-hardware reliant AA?
There is NO software AA in G80 simply because, unlike ATi's R6 chips, the ROPs do it 100% for G80. Besides, why should there be software AA? There's only performance to lose when offloading AA to the SPs.
source: http://www.fudzilla.com/index.php?op...=6721&Itemid=1
Quote:
RV770 is already in production
We learned that the RV770 GPU has already gone into production, and there is a big chance we'll see the R700 generation much earlier than anyone expected. TSMC is already in pilot production, and the chips are, as we reported earlier, built on a 55nm process.
The way it looks now, there is a strong possibility that R700 will show its face at Computex (June 3), while the launch itself might be shortly before or after Computex. We still don't have the final details.
If it's in production already, then the first wafers are done ~2-3 months from now. Then a couple of weeks for assembly, stockpiling and distribution...
:sonic:.........->.......................->................[ati shop]
Right... so 2/3 of the theoretical throughput is a good thing? This isn't a few dozen GFLOPS of difference between theoretical peak and actual performance, it's a few hundred.
Quote:
There is NO software AA in G80 simply because, unlike ATi's R6 chips, the ROPs do it 100% for G80. Besides, why should there be software AA? There's only performance to lose when offloading AA to the SPs.
Ok.... Let's talk again in a few years about this.
Oh, almost forgot about the "only performance to lose" part. Where do you see that?
2/3?
Are you referring to the proverbial "missing MUL"? (although then the number would be ½)
It's true that in some contexts nVIDIA says G80 is MADD+MUL. Technically it's true but there's a catch. The "missing MUL" isn't even supposed to handle shader instructions - it isn't even part of the shader core.
In fact, the actual shader ALUs appear to be quite efficiently exploitable, as B3D found out, unlike the SPs in R6, of which only up to 2/3 are utilized even in synthetic tests.
Quote:
We can push almost every other instruction through the hardware at close to peak rates, with minor bubbles or inefficiencies here and there, but dual issuing that MUL is proving difficult. It turns out that the MUL isn't part of the SP ALU, rather it's serial to the interpolator/SF hardware and comes after it when executing, leaving it (currently) for attribute interpolation and perspective correction.
http://www.bey...ws/1/11
This has probably been covered at some point, but nonetheless...
Are these cards just a minor evolution from the 3xxx series (i.e. G80 to G92), or could they be a large performance boost?
I'm just getting so tired of cards coming out today that are hardly even a worthwhile upgrade from my old 8800 GTS 640MB.
I remember when a new generation of GPU's meant suddenly all the games that were virtually unplayable on the last generation suddenly ran at 100 FPS with full AA/AF :D
Those days need to come back :cool:
nvidia's initial offering will be an even smaller step than G80 to G92, but they will eventually be forced to reveal "GT200" (whatever the hell it is; $10 says it doesn't work or the yields suck like the original R600 designs). ATI's offering, on the other hand, will bring anywhere from 40-60% performance increases in general for the performance segment, and close to a 100% increase by my guess for the midrange (the 4670 should be a beast compared to the 3670; in fact I'd place it around 3850 performance), plus a decent low end.
Either way, I don't think we'll be seeing another G80 for a while. The way the market has been behaving dictates that: people are spending so much money on G92 that nvidia has little reason to invest huge money in replacing it. Not to mention PC gaming has been in a big decline, simply because it costs too much to update your PC every 2 years while a console will last you at least 3-4 and is a lot cheaper too (that, and most PC games these days are crappy console ports instead of being designed for PC hardware). But once nvidia and ati get into a performance war like intel and amd, then we'll be wowed again.
Nope just crap, as is the NV 9800GTX.
First one to implement 32 ROPs and give us high res with 8xAA in Crysis wins.
I'm not buying another card until it has 32 ROPs.
Ummm... hardware AA IS a hindrance to gaming with certain types of rendering.
Software AA is eventually going to be the future whether anyone likes it or not.
Because 2+1 = 3 and 2 outta 3 is two-thirds... aka NOT 1/2.
"SOME" contexts, they have stated that from the get-go as FACT.
Love to see some of these findings of how R600 SPs are only using 2/3 at max in synthetic benches...
Now if you used the words "average" and "real world performance" I might believe you.
LMAO! Yes because the number of ROPs on a GPU is all that matters...
You're right about the "2/3" in the sense that each MADD does 2 FLOPs. But that's it. For some reason you desperately cling to the idea that the MUL should somehow be counted in as well... :shrug:
Quote:
Originally Posted by LordEC911
Guess that's like saying the branch unit (6th ALU) in R600 should output measureable FLOPS too. :rolleyes:
nVIDIA does not claim G80 has 128 superscalar ALUs, only 128 scalar ALUs. That means they admit each SP is simply 1x MADD; the MUL is not counted as part of the ALU core. Technically they're correct if they say G80 has 128 MADD+MUL, since, sure, it does have MADD+MUL. That doesn't mean they are all used for shading. But I said this already, so I have no idea what you're trying to say with the "2/3". The real figure for G80 is likely closer to 2/2. In the end, your argument that the shaders in R600 are more efficient than in G80 is simply quite silly, because it's a given that a scalar GPU architecture is always more efficient than a superscalar one. Superscalar GPUs always face the same problem: shader instructions cannot be chopped smaller and smaller to the extent that all sub-ALUs can be kept anywhere near 100% saturated. Scalar chips simply don't have this problem.
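To put numbers on the "2/3" dispute, here is how the two counting conventions shake out, using the well-known 8800 GTX and HD 2900 XT clocks. This is a sketch of the arithmetic only, not a claim about real-world throughput:

```python
def gflops(alus: int, clock_mhz: float, flops_per_clock: int) -> float:
    """Theoretical peak in GFLOPS for a given ALU count and clock."""
    return alus * clock_mhz * flops_per_clock / 1000.0

# G80 (8800 GTX): 128 scalar SPs at the 1350 MHz shader clock
madd_only = gflops(128, 1350, 2)  # MADD = 2 FLOPs/clock -> 345.6 GFLOPS
with_mul  = gflops(128, 1350, 3)  # MADD+MUL = 3 FLOPs   -> 518.4 GFLOPS
# madd_only / with_mul == 2/3: counting only the MADD leaves you with
# two-thirds of the marketing peak, hence the "2/3" in this exchange.

# R600 (HD 2900 XT): 320 stream processors (64 five-wide units) at 742 MHz
r600_peak = gflops(320, 742, 2)   # ~474.9 GFLOPS theoretical
```

Whether the extra MUL "counts" is exactly what the two posters disagree on; the arithmetic itself is not in dispute.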
Sure.
Quote:
Love to see some of these findings of how R600 SPs are only using 2/3 at max in synthetic benches...
->
Quote:
Originally Posted on Techreport
3DMark pixel shader 1600x1200:
R600: 223.9 FPS -> 0.289 FPS/MHz
8800GTX: 329.3 FPS -> 0.244 FPS/MHz
R600 pushes a mere 18% more FPS/MHz, so if G80 were running at near 100% SP saturation in this test, only ~50% of the sub-ALUs in R600 would be in use.
->
Quote:
Originally Posted on Techreport
3DMark shader particles 1600x1200:
HD2900: 119.4 FPS -> 0.154 FPS/MHz
8800GTX: 124.6 FPS -> 0.097 FPS/MHz
R600 pushes 58% more FPS/MHz, so if G80 were running at near 100% SP saturation in this test, only ~2/3 of the sub-ALUs in R600 would be in use.

So you think R600 is sooo awesome because it can pull huge FLOPS in some FLOPS-virus benchmark prog coded just for R600, eh? But when the chip is put to real use, be it 3DMark or games, most of the ALUs sit idle, because real-world shader code is not written specifically for R600 and the drivers can't produce enough instructions.
Quote:
Now if you used the words "average" and "real world performance" I might believe you.
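A sketch of the normalization behind those quoted numbers: divide FPS by the clock the shaders run at to compare per-MHz efficiency. The clocks below (775 MHz for R600, 1350 MHz for G80's shader domain) are inferred from the poster's own figures, not stated in the quote:

```python
def fps_per_mhz(fps: float, clock_mhz: float) -> float:
    """Normalize a benchmark score by shader clock."""
    return fps / clock_mhz

r600 = fps_per_mhz(223.9, 775)    # ~0.289, matches the quoted figure
g80  = fps_per_mhz(329.3, 1350)   # ~0.244
advantage = r600 / g80 - 1        # ~0.18, the quoted "18% more FPS/MHz"
print(f"{advantage:.1%}")
```

Note that per-MHz comparisons across very different architectures are only as meaningful as the clock you choose to divide by, which is part of why this argument runs in circles.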
Now that this thread is so far off topic, let's try to get it back on track.
The point I was trying to make is that the only part of the R600 architecture that is lacking is the texturing power.
People seem to believe that there is an AA bug that kills performance.
This is not the case, not only was IQ increased over the R580 but so was performance. Shader based AA is also not a cause of R600's poor performance.
With RV770 being an evolutionary step up from R600, doubling the texture units, increasing the ALU count and tweaking the ROPs a bit (even while keeping the count at 16) should be more than enough for a sub-$300 product. Depending on what R700 actually turns out to be, the next quarter should be a good time for ATi, at least until we learn what GT200/G100 is.
ROPs play more of a supporting role for a graphics card not a primary role like the shaders. They write processed pixels into the buffer, and handle some AA functions and post processing. You should buy a card that has a good balanced design and not just focus on one particular facet of it.
Exactly why the TMU count was raised and not the ROP count. Considering that it's a superscalar design, 32 TMUs would have been perfect for the R600, as that would have meant the same 2:1 ratio nvidia used with G80 (which seems to work perfectly). But seeing how the actual shader unit count has been raised from 64 to 96, I'd have wanted 48 TMUs instead, as G80 proved 2:1 works while anything less brings performance issues.
Regardless, the RV770 will be here months before GT200 and it should bring in a lot of money. Besides, looking at how silent ATI and AMD were about the original R600 (before the gazillion changes) and K10, and how nvidia is always bragging and leaking its high-end stuff, I'd say nvidia is facing problems with GT200.
I think ATi should keep the shader:TMU ratio at at least 3:1, since ATi also does AA in the shaders. Oh, and doesn't 480 stream processors equate to 96 shaders? I'm also wondering what the SIMD count on the RV770 will be, though I'm not sure how important that is in terms of efficiency.
But AA is handled by the ROPs in other designs, not the TMUs. And my bad on the 72: I was thinking 48 (instead of 64) units because of the 480 shaders, and multiplied that by 50% to get 72.
Either way, performance gains should be huge with the RV770; the extra TMUs will help open up its power a bit, and the higher number of actual shaders should help a lot, since hardly any games are coded to make use of the odd 4:1 ratio.
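Since the 480-vs-96 shader counts keep causing confusion in this thread, a quick sketch of the conversion, assuming RV770 keeps R600's 5-wide VLIW arrangement (which is the rumor, not a confirmed spec):

```python
# Converting between marketing "stream processors" and actual shader
# units, assuming the R600-family 5-wide VLIW arrangement.
VLIW_WIDTH = 5

def vliw_units(stream_processors: int) -> int:
    """Number of 5-wide VLIW shader units behind a marketing SP count."""
    return stream_processors // VLIW_WIDTH

print(vliw_units(320))  # RV670: 64 units
print(vliw_units(480))  # rumored RV770: 96 units, i.e. 3:1 vs 32 TMUs
```

This is why "480 shaders" and "96 shaders" in this thread describe the same rumored part.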
R700 & GT200 specs speculate :
http://forums.vr-zone.com/showthread.php?t=260999
:shakes: .
dude those are wayyyy off
Perhaps GT200 could have 240 shaders, but I doubt it. Unless they are going with a superscalar design similar to the R600's, 240 is completely out of the question, because 80 TMUs wouldn't be enough for a souped-up G80 imo (the 8800GTX needs 64 and it only has 128 shaders). Besides, no way the die size is over 600mm^2; G80 was ridiculous at nearly 500mm^2 and had terrible yields compared to G92, and especially to RV670. No way in hell they'd dare do that. If they do, ATI will simply price the R700 at $400 while GT200 is $700+ (they'll need Ultra-like pricing due to the extremely low yields), and even if the R700 is up to 15% slower, people will take two and CrossFire them any day on an Intel platform over a single GT200.
Besides, why would nvidia go for a 512-bit bus when GDDR5 is right around the corner? The cost of designing and implementing the large ring bus in the die would be nowhere near worth it; even GDDR4 would be enough. The only reason I could see them wanting a 512-bit bus would be for 32 ROPs. Either way, I can't picture the core running at even 500MHz with those specs. You'd need stock watercooling for even 400MHz with a die larger than 600mm^2.
As for the RV770, I highly disagree with those specs too. Too many sites have confirmed 480 shaders with 32 TMUs, not to mention the use of GDDR5 in both ATI's and nvidia's performance and high-end parts, for this to be correct. Besides, what they list wouldn't be close to a 50% performance increase; in fact it would probably be about the same performance with any AF, due to the extremely low TMU count and small shader count increase.
Maybe you should read the comments, then you'll know that those are just some kind of made up specs.....
Which is just stupid, of course :p:
If Nvidia adopts GDDR3 once again, it will surely be left behind, because AMD will adopt GDDR5 as fast as it can. The speed difference between GDDR3 and GDDR5 is too big. The competition is here, Nvidia, and GDDR5 is there to be adopted, if not buried in sunshine.
Metroid.
R600 was 4:1 in terms of ALU:Tex.
If RV770 really has 96 shaders, then it would be 3:1.
With 96 shaders it should be kept at 4 SIMDs.
The rumored 160 shaders were also rumored to use a 5-SIMD approach.
The number of SIMDs matters for efficiency because of the batch size.
AliG,
R600 and G80 have the same ALU:TMU ratio: 4:1.
G80 has *only* 32 TMUs.
;)
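The ratio claim checks out if you count ATi's ALUs as 5-wide VLIW units rather than marketing stream processors. A quick sketch (the RV770 row is the thread's rumor, not a confirmed spec):

```python
# ALU:TMU ratios for the chips under discussion. ATi "ALU" counts here
# are 5-wide VLIW units (320 SPs / 5 = 64, 480 SPs / 5 = 96).
chips = {
    "G80":           {"alus": 128, "tmus": 32},  # 128 scalar SPs
    "R600":          {"alus": 64,  "tmus": 16},  # 320 SPs / 5 = 64 units
    "RV770 (rumor)": {"alus": 96,  "tmus": 32},  # 480 SPs / 5 = 96 units
}

for name, c in chips.items():
    print(f"{name}: {c['alus'] / c['tmus']:.0f}:1")
```

So G80 and R600 both sit at 4:1, and the rumored RV770 would drop to 3:1 by doubling the TMUs.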
160 SPs (3 sub-units per SP) with 32 TMUs would be really nice. Narrower SPs would certainly help increase ALU efficiency; this alone could easily more than double RV770's shader core throughput compared to RV670.
check out these reviews
http://aphnetworks.com/reviews/asus_...gt_top_512mb/2
in the 8800GT, the card has 56 TMUs, half of its 112 ALUs
and here it also states that G92's texture power is different from G80's (it can do double the work)
http://www.elitebastards.com/cms/ind...1&limitstart=1
All these FLOPS, shaders and co don't interest me at all; I want FPS numbers in games. Both companies impress us with high-tech stuff, one went one way, the red team the other, yet in games all that marketing crap doesn't really give conclusive facts. It's all nice on paper, but does it deliver?
I just want a decent increase in performance over the current cards; if not, they can put it back in the freezer. Be it GT200 or RV700, I'll just buy the fastest card at that time.
+1 Leeghoofd :up:. What we need is not (ALUs, flops, aa, ab, cc, yy, zz) technological mumbo jumbo :mad:. We need good kickars......e vga cards. ATI or Nvidia, whichever company comes up with the bomb, I will be spending my hard-earned bucks on them. Moreover, I like the X2 idea in these new cards, cos I don't think I will ever build an SLI or CrossFire rig. How long will the HD-4000 wafer be? I hope it will fit in a normal mid tower. :shrug:
HD 4000 wafer? What are you talking about? The wafer's size affects the yields, not how big the card will be. The die size won't really make much of a difference either, since dies can only get so big before they're too expensive to produce. The thing that makes the biggest difference is how complex the PCB is (for instance, the original R600's was extremely complex due to the 512-bit memory bus and thus very large), which also includes the power circuitry and the amount of memory on board.
The 4870 shouldn't be that much different from the 3870: all there is is a larger die. The PCB should be very similar, since there aren't any big architectural changes and the memory bus is the same.
As for the GT200, no one's quite sure, because there aren't any firm rumors out yet. But if the 512-bit memory bus is true (which it may be, given how powerful it's supposed to be), the card could end up being quite large. The die for one will be huge and will need very high quality power circuitry; I wouldn't be surprised if it's longer than the 8800 Ultra.
And the R700 is a hard one to call, because the positioning of the chips and the CrossFire bridge could make a substantial difference in length (just look at the GDDR4 and GDDR3 3870X2 pictures: at least an inch of difference). But I'd be willing to bet it's around the size of the GDDR3 3870X2, since the RV770 dies are a bit larger and it's supposed to have 1GB of memory per GPU.
A bit more than an inch, which can make all the difference. That's exactly why I say the positioning of the GPUs, memory and CrossFire bridge could mean everything for the R700's length. I mean, look at the weird 3850 X3 thingy ASUS is building: they managed to put GPUs on both sides, and thus the thing isn't 1.5' long.
Either way, how many people run into trouble with height compared to length? Unless you are planning on putting a high-end, monstrously hot, large GPU in a shuttle case (which would be a bad idea given how little airflow the case would have), height will never be an issue. Most cases offer at least 2 inches of clearance above a standard GPU anyway.
http://www.fudzilla.com/index.php?op...=6784&Itemid=1
Looks like we might be seeing it shortly.:)
Bah, Fudzilla. Better source pl0x? :D
Actually, I would agree that we will be seeing the R700 cards around June. The cards taped out wayyyy back in January or something like that; multiple sites confirmed it, and ATI has said they are planning a Q2 launch if I remember properly.
All we're waiting on is GDDR5 at this point, which is in the final testing stages right now. So yeah, I'd say June sounds about right.
Duh.
Quote:
Originally Posted by LordEC911
At the same quality.
RV770XT cooler exposed ?
http://forums.vr-zone.com/showthread.php?t=261753
" RV770XT will use GDDR5 and digital power supply, can be seen from the radiator size, GDDR5 relative GDDR3 fever higher than many. "
Well, I said late May two days ago (HERE) if that's worth anything to you.
//Andreas
that cooler looks quite short!
also, it's interesting from the NordicHardware link that ATi is planning on releasing a GDDR3 version before the GDDR5 one
http://www.itreviews.co.uk/graphics/...dware/h889.jpg http://regmedia.co.uk/2007/05/14/amd_ati_2900_xt_1.jpg
Similar? No man, this is the same as the 2900XT's, in design and visuals. The X1900 and 3870 coolers are not... That (copper?) block with those heatpipes is an exact copy of the 2900XT's.
The only (and main) difference I'm seeing is the bottom plate, i.e. the GPU/cooler contact area. On the cooler pictured over at the VR-Zone forums, the square that's supposed to make contact with the GPU is in line with the rest of the cooler and a lot smaller than on the 2900 cooler. This little square is positioned diagonally on the 2900 cooler. (I hope you guys can figure out what I mean :p:)
afaik it looks like the shroud from a 2900 Pro? that's my XT and it's not the same cooler:
http://img292.imageshack.us/img292/1928/dscf0975an6.jpg
Hmm, that's possible, but you're forgetting the RV770XT IS "performance"; most likely the 4850 will come out first, as that's supposed to have either GDDR4 or GDDR3 (I don't remember off the top of my head). But I highly doubt the 4870 will come out with GDDR3, unless third-party manufacturers decide to offer that as an option, the same way there are GDDR3 3870s and GDDR4 3850s.
Hopefully that means the 4870 will be quite short as well. That would be a smart move on ATI's part, but I'm pretty certain the GDDR3 card is going to be the RV740. Either way, I'm looking forward to seeing the 4670's performance. Think about it: 240 shaders, 20 TMUs, and 256-bit memory with the possibility of GDDR4. This thing will probably perform at 3850 level or above, especially with AF and AA enabled, because it's a lot less bottlenecked than the R600 ever was. Plus it should be cheap as hell, so I'm considering making it my next upgrade if the 4870 is out of the question (GT200 for sure is way too expensive; even if the performance is there, yields will be terrible).
No, that's not quite true. The X1900 coolers do not look like the 2900XT cooler, nor do most of the X1950 coolers; the X1950XTX cooler, however, is a simpler version of the 2900XT's. I just wish the 2900XTX's cooler had been put on the 2900XT; that thing was a monster, something like 4 heatpipes with a blower attached, but it was long as hell because of it. And the 3870's cooler is also considerably different: a larger, better-designed fan, no heatpipes, and quite a bit smaller overall.
This RV770 cooler and the 2900XT cooler do look very similar, but the 4870's has 2 heatpipes while the 2900XT's has 3, and as someone else pointed out, the bases are slightly different. Most likely that's due to the R600 die being considerably larger and thus needing the contact plate placed diagonally to conserve space.
But hey, I'm not complaining; if this means low temps and silence, I'm all for it. The 3870 cooler was quiet but wasn't great at heat removal, because it was just a copper block. The 3850 cooler was kinda meh but looked kinda cool. The 8800GT's cooler just kinda... sucked... period. The G92 GTS cooler, however, was both quiet and efficient at removing heat, and that's why the G92 GTSs are selling like hotcakes even though the 8800GT is $10-$30 cheaper and offers almost the same performance. In other words, I think ATI took notes on how well the G92 GTS was selling compared to the 8800GT and 3870 (relative to how many cards the respective companies produced) and decided to do the same.
To be honest, the 2 heatpipes probably aren't strictly necessary, but once again, I'm not complaining. I just wonder what the new IceQ 4 Turbo cooler for the 4870 will look like, considering the 4870's stock cooler is better than the IceQ 3 Turbo (with the exception of the UV housing).
They later added a third heatpipe for the 2900 Pro and XT (they use the same cooler), but the original 2900XT cooler was like mine, one of the first Sapphires out there:
http://img85.imageshack.us/img85/289...2121kw1.th.jpg http://img362.imageshack.us/img362/6...2122sa5.th.jpg http://img339.imageshack.us/img339/5...2123gm9.th.jpg
The red plate, the fan... but the heatpipes are bent a bit differently, though.
RV770 is both impressive and disappointing at the same time.
RV670 specs:
16 TMUs
320 SPs arranged in 4 SIMD arrays of 80 SPs each
16 ROPs
256-bit GDDR3/4 RAM
666 million transistors on 55nm
RV770 specs:
32 TMUs
800 SPs arranged in 5 SIMD arrays of 160 SPs each
16 ROPs
256-bit GDDR4/5 RAM
~830 million transistors on 55nm
Now, the impressive part is that ATi managed to more than double the number of SPs and double the number of TMUs while adding only about 200 million transistors. The disappointing part is that overall performance is only expected to be 50% higher than RV670. One would expect more than that, especially since RV770 will finally offload AA to the ROPs.
There is no "true next-gen" chip in the DX10 generation; even GT200 is just a revamped G92 with more shaders, ROPs and TMUs. Things in the GPU world will be boring until DX11 comes.
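The scaling factors implied by those two (rumored) spec lists can be checked in a couple of lines:

```python
# Growth from RV670 to RV770 per the rumored specs quoted above.
rv670 = {"sps": 320, "tmus": 16, "rops": 16, "transistors_m": 666}
rv770 = {"sps": 800, "tmus": 32, "rops": 16, "transistors_m": 830}

for key in rv670:
    print(key, rv770[key] / rv670[key])
# SPs grow 2.5x and TMUs 2x for only ~1.25x the transistor budget,
# which is the "impressive" half of the argument above.
```

Note the listed figures actually imply ~164 million added transistors, a bit less than the "about 200 million" quoted, though both numbers are rumors anyway.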
nice for a value card tho' :hrhr: assuming it pans out.
Quote:
overall performance is only expected to be 50% higher than RV670
Thing is, I don't think ATI is offloading AA to the ROPs; that's probably the reason for the big shader increase and no ROP increase, and it could very well mean there's a need for all those shaders. However, I don't think that transistor count is right; there's no way it's possible to more than double the shader count while adding only 200 million or so transistors.
The first story about "50% faster" was posted by Fudzilla; I think he beat me by a few hours on that one. I'm not sure what his source told him, but mine was very vague, and after hearing some more I started thinking that the "50% faster" was based on the belief that RV770 had 50% more SPs, i.e. 480 in total. I can't say that for certain though; it's just speculation on my behalf.
I've been hearing the figure 480 over and over again, but the context has been a bit different every time. Right now it seems that we're looking at a total of 800 SPs, which means 480 added SPs. If I'm right about my first speculation, we can throw the early stories of "50% faster" out the window.
But there's one more thing to it. (I wrote this in the original "50% faster" article.) If you go back and check how much faster each [true] generation has been than the last, you won't see figures much higher than 50%. With each generation that has passed, the previous generation's high end has performed like the new generation's midrange. But with the lack of a genuine high-end chip from AMD/ATI, I can surely understand that most people are terrified by this.
And then we have the fact that AMD/ATI has come up with a way for two chips to share a memory buffer, which at least makes me really eager to see how the dual-GPU R700 will perform, not to mention two cards with four GPUs and just two memory buffers.
I'm very skeptical about whether AMD/ATI will ever make another super GPU. To them, it just makes more sense to spread the workload over several GPUs instead. And considering how well the R680 concept worked out, I can only imagine that AMD/ATI will keep heading down that road. The fact that the RV670 was a poor performer in AA/AF situations shouldn't cloud your opinion of the R680 concept. Just plug it in and it works. No hassle.
//Andreas
Shared memory would eliminate the usual drawback of mirroring the identical data both GPUs use, thus freeing memory for other things like higher-res textures and whatnot.
Well, in the case of CrossFire, it would double the amount of memory.
Today the data is mirrored in each buffer, which means that the 3870X2 1GB, really only uses 512MB, and two 3870X2 cards have the same data mirrored four times.
With two GPUs sharing one 1GB buffer, the entire 1GB would be used in CrossFire. Link two of these cards and they would still have just 1GB of unique data, even though the combined buffer is 1+1GB. But we've doubled the effective buffer, or halved the loss of effective memory. The data would only be mirrored once, instead of four times.
That's theory and rumor though ;)
//Andreas
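To make the mirroring arithmetic from the post above concrete (this is a sketch of the rumor, not a confirmed memory design):

```python
def effective_memory(total_mb: int, copies_of_data: int) -> int:
    """MB holding unique data when identical data exists in N copies."""
    return total_mb // copies_of_data

# Today's CrossFire: 3870X2 with 1GB on board, data mirrored per GPU
print(effective_memory(1024, 2))  # 512 MB usable
# Two 3870X2 cards: 2GB on board, same data mirrored four times
print(effective_memory(2048, 4))  # still 512 MB
# Rumored R700: two GPUs share one 1GB buffer, one copy of the data
print(effective_memory(1024, 1))  # 1024 MB
# Two such cards: data mirrored once per card, so two copies in 2GB
print(effective_memory(2048, 2))  # 1024 MB
```

Same total memory in both quad-GPU cases, but the shared buffer doubles the amount of unique data it can hold, exactly as described.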
No more "mini-stuttering" (I suppose), you mean. Which would be awesome.
I think nothing is clear yet. Just guessing.
RV770pro GDDR3 55nm soon. RV770XT 45nm GDDR5 4GHz 680SP, 36TMU in July.
Where are you getting those numbers from? The RV770XT specs you named have absolutely nothing to back them up.
45nm won't be ready until late Q3 and at that time TSMC's 40nm will also be ready...
I know from insider sources, and techreport's own editor also agreed, that the RV770 offloads AA to the ROPs. The original R600 design was supposed to do AA in the ROPs, but it never worked properly, so they were forced to write drivers that used the shaders for AA.
Some ppl claim that this wasn't a big performance hit and that it was the TMUs limiting the RV670 more than anything. That I'm not so sure about.
Doubt all you want, doesn't make it true. :ROTF:
I have sources who work @ ATi :rolleyes:
The DX10 spec is a joke. This is why there isn't a single game that runs faster on DX10 than DX9, and multi-GPU setups take a huge hit in DX10 as well. It's also why both ATi and nV are holding back on developing any 'true next-gen' DX10 chips. Especially with the US economy the way it is, they're playing it safe until DX11.
AA on ROPs would rock so hard, it would roll!
shared mem + AA rops + 2 gpus = epic dual-gpu singlecard win