Except some of the stuff sounds off :bs: Water can't cool it? That itself sounds crazy; they just can't release a card that needs something exotic to keep it cool, it would be too expensive.
just my :2cents:
Let me educate you on Hexus:
Quote:
Originally Posted by HaLDoL
;)
Quote:
Originally Posted by Rys
Most people don't seem to realize that with operations streamed through the shader domain, from vertex ops, to geo ops, to pixel ops, the shader core speed MUST BE FASTER than the ROP speed, unless the ROP ratio is 1:1 (ROP speed is generally the same as "core" speed, no?). Those 3 operations must happen before the thread is handed off to the ROP and final buffer. In previous ATI chips, R580 for example, the ratio was 3 pixel ops for one ROP...3:1. However, the R600 should be able to do 4 operations instead of the three that R580 did simultaneously, but R600 is left with the same number of ROPs. Given that we have 64 units that can do these four instructions, at 4x the ALU count, that makes for 256 ops (vs 48 of R580), of which I assume 3 will be re-processed, and 1 handed off to the ROP. I'm missing a step in the "pipeline" here, but generally texture fetch:ROP = 1:1. With the added geometry op, it makes a lot of sense that they would then have to largely increase core speeds in order to have a worthwhile product, else it will simply slightly overstep R580, and that just plain doesn't make sense. It should be AT LEAST as fast as G80, if not faster...and that requires a SEVERE boost in clock speeds. Maybe not 2GHz, but 1600MHz sounds fair to me, and it should be higher if core speed is 800MHz.
Quote:
Originally Posted by turtle
So yes, NN, 4x the ops, and 4 times the clock rate; however, since they need to hide tex-fetch latency, a lot of that power is simply "not seen" in the end result...it simply makes it more efficient. In order to fully maximize that power, you need to design the app to the exact resources of the GPU, and that ain't gonna happen any time soon. They haven't even really maxed out R580 yet!
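Rough back-of-the-envelope math on those numbers, just as a sketch; every figure below is a rumor or an assumption pulled from the posts above, not a confirmed spec:

# Sketch of the rumored R600 shader math from the post above (all figures are rumors/assumptions).
r580_alus = 48                      # R580's pixel shader ALUs
r600_units = 64                     # rumored R600 shader units
ops_per_unit = 4                    # rumored ops per unit per clock (vec4)
r600_ops = r600_units * ops_per_unit
print(r600_ops)                     # 256 ops/clock vs 48 on R580

shader_clock_ghz = 1.6              # the hypothetical 1600 MHz shader clock from the post
print(r600_ops * shader_clock_ghz)  # ~410 billion shader ops/s, peak theoretical
print(r580_alus * 0.65)             # R580 at its 650 MHz core: ~31, for scale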
ATI's Dave Orton stated a take-no-prisoners approach to the upcoming R600.
More at Beyond3D
That was a rather funny quote turtle =)
Yeah, Geo quotes Orton as saying R600 will have massive bandwidth...like we didn't know that already just based on GDDR4 alone, although his strength of wording does make it sound like another R300 revolution. My previous guesstimates on bandwidth stand, as they were calculated assuming R600 has a 512-bit bus. Geo estimates 150+GB/s...and while that may hold true at stock, there is no 1.2GHz-rated GDDR4, only 1.1 and 1.4...and many will either buy an overclocked card or overclock themselves, which at the 1.4 spec puts the XTX at 180GB/s. In that regard, R600 will crush G80...but it's surely true memory bandwidth isn't everything...but it sure is good for (MS)AA/AF/HDR!
Quote:
Originally Posted by flopper
I'm sure many of you read B3D like I do, but just to throw this up there:
This jibes with almost every rumor we've heard (Hexus and the like) other than the 64x4 shaders, but does jibe with Orton's comment about next gen "possibly having 96 shaders", which I find more credible. Also, the little snips I've seen on misc forums from people in the know seemed to hint that 128 shaders was not the case for R600 (a la the old leaked specs), but was the plan for a future part...So perhaps R600 is 32x3, and R680 is 32x4 on 65nm? It would make sense to me...although it's all conjecture at this point.
:D :toast:
Quote:
Originally Posted by Ubermann
Bah... AMD/ATI is procrastinating more and more...
They are gonna get rushed on both the GPU and CPU fronts
at this rate :(
If that really is its spec, ATi will at best tie with nVidia. What you are all forgetting is that the G80 is completely modular; the only reason they didn't go 512-bit right off the bat is because they couldn't justify the yields.
Quote:
512-bit external bus (confirmed), 32 ROPs, 96 shaders possibly clocked at 2GHz. Though they could theoretically go to 4:1 ratio (128 shaders) and clock them at 1.4GHz, i.e. 2:1 ratio against the core clock.
Core clocked at 700MHz+.
1GB memory...makes the most sense [on a 512-bit bus].
The original test cards sent to developers were 256-bit, not 384. It wouldn't take much at all for G80 to go full 512-bit, and I can promise you they are going to. That means ATi has a serious problem on their hands, since it would be ~160 versus at best 128 from ATi. They would need at least a 25% headstart on the clock speed *at least*, assuming they have the same IPC (which we know ATi won't).
ATi has been surviving on RAM bandwidth for several generations and it's finally come back to bite them in the ass. It's what they get for not changing their shader path since the R300.
The fact that ATi claims they are going "all out against" the G80 is a clear indication to me that they are terrified of it. Most of their products have a 3-4 year lead time in the pipe; they wouldn't dare sacrifice their future products (i.e. going all out) if they weren't truly afraid that what they have isn't enough. Granted, they might pull another R300 out of their hat to impress their new owners, and for their sake I hope they do.
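For what it's worth, a quick sanity check on that shader-count math; all of these are rumored figures, and "throughput" here is just units x shader clock, ignoring any IPC differences:

# Rumored shader counts and clocks from this thread; nothing here is confirmed.
g80_upgraded = 160                 # hypothetical G80 with extra clusters added
r600_best = 128                    # best-case rumored R600 shader count
print(g80_upgraded / r600_best)    # 1.25 -> the "25% clock headstart" claim above

# The two R600 configs from the quoted spec, as units * shader clock (GHz):
print(96 * 2.0)                    # 192.0
print(128 * 1.4)                   # 179.2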
Fundamentally, I totally agree with you on the R600 vs G80 raw architectural specs. If that is true, they are very similar sans the scalar/vector shader approach, and perhaps ATi sacrificing more shaders for a higher clock; presumably whatever has better yields and/or performance. Granted, we don't know the final clocks on R600, or how mature the drivers are or will become for either high-end part.
Where we disagree is on bandwidth. I don't think Nvidia will be so quick to go 512-bit. As you implied, this would require an increase in ROPs to 32, as they are tied together in the scalable architecture, and while it's perhaps possible, it would create a large die with all those ROPs/shaders/larger memory controller, even on 80nm. It would also cost development time.
I believe Nvidia will go with the same approach they took with G70: they will optically shrink G80 as it is down to 80nm and get it out the door quickly in '07, with limited R&D needed, and reap the benefits of the smaller die. It would be the smarter business decision imho, as they could fight ATi on the cost front, if not on the performance front with higher clocks and perhaps GDDR4.
I personally don't believe 512-bit will come from Nvidia until 65nm at the mid/end of the year...be that G8x or G90. From there they could shrink that part to 55nm...and the cycle continues.
kinda anticlimactic question:
new AA modes with R600?
16xCSAA on the GeForce 8's is really nice.
If by this you mean Nvidia will just tack on the extra units for G81, then yes, I agree; but if you mean that R600 will only tie G80, then I disagree, at least if those specs are true. Keep in mind that rumors have placed those 64/96 shaders as being vec4 and not scalar, so almost 4 times as capable as Nvidia's.
Quote:
Originally Posted by Sentential
That's the question on my mind as well...I mean, with a whole shatload of bw, one would hope there is some new form of MSAA...but even if there isn't, current AA mode improvements ATi has utilized and even just recently released (ASBT/EATM alpha blending AA, adaptive AA, ASM aka Alpha Sharpen Mode etc etc etc...There's like 12 that can be enabled via Ray's Tray Tools) for current and older cards should look good...not to mention if it has support for the DX10.1 spec, which basically ALL pertains to more impressive and effective MSAA.
Quote:
Originally Posted by grimREEFER
R600 looks to be a beast...I can't wait to see some bench #s.
That sounds very much like their past, and this will probably be the same.
Quote:
Originally Posted by turtle
But who knows, I can't see them making a new GPU if they lose the "crown" when R600 arrives.
:toast: my friend, totally agree
Quote:
Originally Posted by Sentential
G80 arch is great, and Nvidia will up the shader processor count for sure; the question is how much they will need to beat ATI, and whether yields will be good enough for that.
ATI has fallen off. It all started with the short supply of the X800 cards...then the short supply of the X1800 cards; the X1900s had good supply but were overpriced, and performance was comparable to a 7900GT, which was $275 at the time...every now and then I like to switch companies around, but I haven't since I got my 6800GT...seems like Nvidia has been doing everything right after the horrible FX series...eventually the ATI name will be gone and everything will be renamed to AMD...ATI has been around for a while, but as of late they are having all kinds of problems.
I see no problem with ATI.
What you seem to not get is the fact that Nvidia's "384-bit" is not comparable to ATi's 512-bit.
With Nvidia, each "cluster" (or controller) has a 64-bit bus with the memory connected to it. You have 6 clusters; that's 64-bit x 6, and that gets you Nvidia's number.
With ATI, memory acts as one unit with a 512-bit interface to the memory controller, which has a 1024-bit interface to the GPU (or memory controller, wherever).
And Nvidia is limited architecturally in terms of increasing the amount of memory they can put on their cards; ATI is not.
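A rough sketch of how those bus widths turn into the bandwidth numbers quoted earlier in the thread; the memory clocks below are the rumored/retail figures people have been throwing around, not anything confirmed:

# Bandwidth = (bus width in bytes) x (effective transfer rate).
# GDDR3/GDDR4 are double data rate, so effective rate = 2 x memory clock.
def bandwidth_gb_s(bus_bits, mem_clock_ghz):
    return (bus_bits / 8) * (mem_clock_ghz * 2)

print(6 * 64)                      # G80: 6 clusters x 64-bit = 384-bit
print(bandwidth_gb_s(384, 0.9))    # 8800 GTX with 0.9 GHz GDDR3 -> 86.4 GB/s
print(bandwidth_gb_s(512, 1.1))    # R600 rumor, 1.1 GHz GDDR4   -> 140.8 GB/s
print(bandwidth_gb_s(512, 1.4))    # R600 rumor, 1.4 GHz GDDR4   -> 179.2 GB/s (the ~180 GB/s above)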
Quote:
Originally Posted by Sentential
ok this is going nowhere, :stick: I've been hearing about the R600 for a very, very long time now :horse: How much longer do they expect us to wait for it? :mad: By now at least there should have been some kind of preview or some pictures of the board floating around, but nada, nothing so far.... By the way things are going, it looks to me like the R600 will only see the light of day sometime in March or April of next year :fact:
I am sick of waiting for it :coffee:
way to go ATI and AMD :rolleyes:
uhhhh....the R600 is the first unified shader design, and you forgot the R300 which was the 9700 ;)
Quote:
Originally Posted by ether.real
The R500 is unified; it has 48 (3x16) ALUs for vertex or pixel shader processing.
Quote:
Originally Posted by Stuperman
http://www.theinquirer.net/images/articles/r500.jpg
R400 was unified too.
When they realized that they needed more time, R400 became R500.
Quote:
Originally Posted by ether.real
I agree nVidia's recent cards have been poor in terms of bandwidth; I'm not sure if I said that before or not, but I'll say it now.
Quote:
Originally Posted by ahmad
What I meant by my previous comment is that all nVidia would potentially have to do is add more clusters to get to 512-bit. Granted, that would be a hell of a lot of chips, but then again they could use both sides of the card, which should be enough.
It'll take a new core and PCB, but it will not take much at all for the "G80"-type core to match/beat the R600. The R600, on the other hand, would be much harder to redesign to tackle an upgraded G80, since their design isn't nearly as modular as nVidia's.
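A small sketch of what "adding clusters" would mean for the bus and chip count, assuming the usual 64 bits of memory bus per cluster and 32-bit-wide GDDR chips (both assumptions, not confirmed G80/G81 details):

# Each cluster is assumed to bring 64 bits of memory bus; GDDR chips are 32 bits wide.
BITS_PER_CLUSTER = 64
BITS_PER_CHIP = 32

for clusters in (6, 8):
    bus_bits = clusters * BITS_PER_CLUSTER
    chips = bus_bits // BITS_PER_CHIP
    print(clusters, bus_bits, chips)   # 6 -> 384-bit, 12 chips; 8 -> 512-bit, 16 chips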
You make it sound very simple to re-design the core and release it again.