Except some of the stuff sounds off. Water can't cool it? That in itself sounds crazy; they just can't release a card that takes something exotic to keep it cool, it would be too expensive.
just my
The Cardboard Master Crunch with us, the XS WCG team
Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64
Let me educate you on Hexus:
Originally Posted by HaLDoL
Originally Posted by Rys
Last edited by turtle; 12-14-2006 at 04:39 PM.
That is all.
Peace and love.
Most people don't seem to realize that with operations streamed through the shader domain, from vertex ops, to geometry ops, to pixel ops, the shader core speed MUST BE FASTER than the ROP speed, unless the ratio is 1:1 (ROP speed is generally the same as "core" speed, no?). Those three operations must happen before the thread is handed off to the ROP and the final buffer. In previous ATI chips, R580 for example, the ratio was 3 pixel ops for one ROP: 3:1. However, R600 should be able to do 4 operations instead of the three that R580 did simultaneously, yet R600 is left with the same number of ROPs.

Given 64 units that can each do these four instructions, at 4x the ALU count, that makes for 256 ops (vs. 48 on R580), of which I assume 3 will be re-processed and 1 handed off to the ROP. I'm missing a step in the "pipeline" here, but generally texture fetch:ROP = 1:1. With the added geometry op, it makes a lot of sense that they would then have to largely increase core speeds to have a worthwhile product, else it will simply slightly overstep R580, and that just plain doesn't make sense. It should be AT LEAST as fast as G80, if not faster... and that requires a SEVERE boost in clock speeds. Maybe not 2GHz, but 1600MHz sounds fair to me, and it should be higher if the core speed is 800MHz.
Originally Posted by turtle
So yes, NN, 4x the ops, and 4 times the clock rate; however, since they need to hide texture-fetch latency, a lot of that power is simply "not seen" in the end result... it simply makes the chip more efficient. To fully maximize that power, you need to design the app to the exact resources of the GPU, and that ain't gonna happen any time soon. They haven't even really maxed out R580 yet!
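To make the arithmetic in the posts above concrete, here's a quick back-of-envelope sketch; every figure in it is the rumored/speculated one from this thread, not a confirmed spec:

```python
# Back-of-envelope shader throughput based on the rumored R600 specs above.
# All figures are speculation from the thread, not confirmed hardware numbers.
R580_PS_UNITS = 48       # R580: 48 pixel shader processors
R600_UNITS = 64          # rumored unified shader units
R600_OPS_PER_UNIT = 4    # rumored Vec4: 4 ops per unit per clock

r600_ops_per_clock = R600_UNITS * R600_OPS_PER_UNIT
print(r600_ops_per_clock)  # 256 ops/clock vs R580's 48

# If 3 of every 4 ops recycle through the shader core and only 1 is handed
# to a ROP, the shader clock has to outrun the ROP clock to keep the ROPs
# fed -- hence the speculation about 1.6GHz+ shader clocks.
```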
Last edited by cadaveca; 12-14-2006 at 04:42 PM.
ATI's Dave Orton stated a no-prisoners approach to the upcoming R600.
More at Beyond3D
4670k 4.6ghz 1.22v watercooled CPU/GPU - Asus Z87-A - 290 1155mhz/1250mhz - Kingston Hyper Blu 8gb -crucial 128gb ssd - EyeFunity 5040x1050 120hz - CM atcs840 - Corsair 750w -sennheiser hd600 headphones - Asus essence stx - G400 and steelseries 6v2 -windows 8 Pro 64bit Best OS used - - 9500p 3dmark11 (one of the 26% that isnt confused on xtreme forums)
That was a rather funny quote, turtle =)
Yeah, Geo quotes Orton as saying R600 will have massive bandwidth... like we didn't know that already just based on GDDR4 alone, although his strength of wording does make it sound like another R300 revolution. My previous guesstimates on bandwidth stand, as they were calculated assuming R600 has a 512-bit bus. Geo estimates 150+GB/s... and while that may hold true at stock, there is no 1.2GHz-rated GDDR4, only 1.1 and 1.4... and many will either buy an overclocked card or overclock themselves, which at the 1.4 spec puts the XTX at 180GB/s. In that regard, R600 will crush G80... but it's true memory bandwidth isn't everything... but it sure is good for (MS)AA/AF/HDR!
Originally Posted by flopper
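The bandwidth figures quoted above fall out of simple arithmetic; a hedged sketch, assuming the rumored 512-bit external bus and double-pumped (DDR) GDDR4:

```python
# Peak memory bandwidth in GB/s: (bus width in bytes) x (effective data rate).
# GDDR3/GDDR4 are double data rate, so effective rate = memory clock x 2.
def peak_bandwidth_gb_s(bus_bits, mem_clock_ghz):
    return (bus_bits / 8) * (mem_clock_ghz * 2)

print(peak_bandwidth_gb_s(512, 1.2))  # ~153.6 -> Geo's "150+GB/s" estimate
print(peak_bandwidth_gb_s(512, 1.4))  # ~179.2 -> the ~180GB/s XTX figure
print(peak_bandwidth_gb_s(384, 0.9))  # ~86.4  -> G80 GTX, for comparison
```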
I'm sure many of you read B3D like I do, but just to throw this up there:
This jibes with almost every rumor we've heard (Hexus and the like) other than the 64x4 shaders, but it does jibe with Orton's comment about next gen "possibly having 96 shaders", which I find more credible. Also, the little snips I've seen on misc forums from people in the know seemed to hint that 128 shaders was not the case for R600 (a la the old leaked specs), but was the plan for a future part... So perhaps R600 is 32x3, and R680 is 32x4 on 65nm? It would make sense to me... although it's all conjecture at this point.
Originally Posted by Ubermann
Bah... AMD/ATI is procrastinating more and more...
They are gonna get rushed on both the GPU and CPU fronts at this rate.
If that really is its spec, ATi will at best tie with nVidia. What you're all forgetting is that G80 is completely modular; the only reason they didn't go 512-bit right off the bat is that they couldn't justify the yields.
512-bit external bus (confirmed), 32 ROPs, 96 shaders possibly clocked at 2GHz. Though they could theoretically go to a 4:1 ratio (128 shaders) and clock them at 1.4GHz, i.e. a 2:1 ratio against the core clock.
Core clocked at 700MHz+.
1GB memory... makes the most sense [on a 512-bit bus].
The original test cards sent to developers were 256-bit, not 384. It wouldn't take much at all for G80 to go full 512-bit, and I can promise you they are going to. That gives ATi a serious problem on their hands, since it would be ~160 versus at best 128 from ATi. They would need at least a 25% head start on clock speed, *at least*, assuming they have the same IPC (which we know ATi won't).
ATi has been surviving on RAM bandwidth for several generations, and it's finally come back to bite them. It's what they get for not changing their shader path since the R300.
The fact that ATi claims they are going "all out against" the G80 is a clear indication to me that they are terrified of it. Most of their products have a 3-4 year lead time in the pipe; they wouldn't dare sacrifice their future products (i.e. going all out) if they weren't truly afraid that what they have isn't enough. Granted, they might pull another R300 out of their hat to impress their new owners, and for their sake I hope they do.
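For what it's worth, the two shader configurations floated above work out to roughly the same raw throughput. A quick sketch, using only the hypothetical figures from this thread and assuming one op per shader per clock:

```python
# Raw shader ops/sec (in billions) for the two hypothetical G8x configs.
# One op per shader per clock is assumed for this rough comparison.
def raw_gops(shader_count, shader_clock_ghz):
    return shader_count * shader_clock_ghz

print(raw_gops(96, 2.0))   # 192.0 -> the 3:1 ratio config
print(raw_gops(128, 1.4))  # ~179.2 -> the 4:1 ratio config
# Within ~7% of each other, so the ratio choice would plausibly come down
# to yields and achievable clocks, as the post argues.
```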
Last edited by Sentential; 12-15-2006 at 11:40 AM.
NZXT Tempest | Corsair 1000W
Creative X-FI Titanium Fatal1ty Pro
Intel i7 2500K Corsair H100
PNY GTX 470 SLi (700 / 1400 / 1731 / 950mv)
Asus P8Z68-V Pro
Kingston HyperX PC3-10700 (4x4096MB)(9-9-9-28 @ 1600mhz @ 1.5v)
Heatware: 13-0-0
Fundamentally, I totally agree with you on the R600 vs G80 raw architectural specs. If that is true, they are very similar sans the scalar/vector shader approach, and perhaps ATi sacrificing more shaders for a higher clock; presumably whatever has better yields and/or performance. Granted, we don't know the final clocks on R600, or how mature the drivers are or will become for either high-end part.
Where we disagree is on bandwidth. I don't think Nvidia will be so quick to go 512-bit. As you implied, this would require an increase in ROPs to 32, as they are tied together in the scalable architecture, and while it's perhaps possible, it would create a large die with all those ROPs/shaders/larger memory controller, even on 80nm. It would also cost development time.
I believe Nvidia will take the same approach they took with G70: they will optically shrink G80 as-is to 80nm and get it out the door quickly in '07, with limited R&D needed, and reap the benefits of the smaller die. It would be the smarter business decision imho, as they could fight ATi on the cost front, if not on the performance front with higher clocks and perhaps GDDR4.
I personally don't believe 512-bit will come from Nvidia until 65nm at the middle/end of the year... be that G8x or G90. From there they could shrink that part to 55nm... and the cycle continues.
Last edited by turtle; 12-15-2006 at 12:21 PM.
Kinda anticlimactic question:
New AA modes with R600?
16xCSAA on the GeForce 8s is really nice.
DFI P965-S/core 2 quad q6600@3.2ghz/4gb gskill ddr2 @ 800mhz cas 4/xfx gtx 260/ silverstone op650/thermaltake xaser 3 case/razer lachesis
If by this you mean Nvidia will just tack on the extra units for G81, then yes, I agree; but if you mean that R600 will only tie G80, then I disagree, at least if those specs are true. Keep in mind that rumors have placed those 64/96 shaders as being Vec4 and not scalar, so almost 4 times as capable as Nvidia's.
Originally Posted by Sentential
That's the question on my mind as well... I mean, with a whole shatload of bandwidth, one would hope there is some new form of MSAA... but even if there isn't, the current AA mode improvements ATi has utilized and even just recently released (ASBT/EATM alpha-blending AA, adaptive AA, ASM aka Alpha Sharpen Mode, etc... there are like 12 that can be enabled via Ray's Tray Tools) for current and older cards should look good... not to mention if it has support for the DX10.1 spec, which basically ALL pertains to more impressive and effective MSAA.
Originally Posted by grimREEFER
Last edited by turtle; 12-15-2006 at 02:47 PM.
R600 looks to be a beast...I can't wait to see some bench #s.
[-AMD Opteron 165 @ 3.0Ghz-]
[-DFI LAN Party UT uNF4 Ultra-D-]
[-AData 4GB (4x1GB) DDR484-]
[-eVGA Geforce 8800GTS 640MB 320-Bit-]
[-Creative Sound Blaster X-Fi XtremeMusic-]
[-Seagate 160GB & 250GB 16MB Cache SATA300-]
[-OCZ GameXStream OCZ700GXSSLI-]
[-DELL 2005FPW LCD Monitor via DVI-D- & Westinghouse 22w3 LCD Monitor via DVI-D]
[-Windows Vista Ultimate x64-]
That sounds very much like their past, and this will probably be the same.
Originally Posted by turtle
But who knows; I can't see them making a new GPU if they lose the "crown" when R600 arrives.
Last edited by Ubermann; 12-15-2006 at 03:15 PM.
My friend, I totally agree.
Originally Posted by Sentential
The G80 arch is great, and Nvidia will up the shader processor count for sure; the question is how much they will need to beat ATI, and whether yields will be good enough for that.
ATI has fallen off. It all started with the short supply of the X800 cards... then the short supply of the X1800 cards. The X1900s had good supply but were overpriced, and performance was comparable to a 7900GT, which was $275 at the time. Every now and then I like to switch companies around, but I haven't since I got my 6800GT... seems like Nvidia has been doing everything right after the horrible FX series. Eventually the ATI name will be gone and everything will be renamed to AMD... ATI has been around for a while, but as of late they are having all kinds of problems.
I see no problem with ATI.
What you don't seem to get is that Nvidia's "384-bit" is not comparable to ATi's 512-bit.
With Nvidia, each "cluster" (or controller) has a 64-bit bus with the memory connected to it. You have 6 clusters; that's 64-bit x 6, and that gets you Nvidia's number.
With ATI, memory acts as one unit with a 512-bit interface to the memory controller, which has a 1024-bit interface to the GPU.
And Nvidia is limited architecturally in how much memory they can put on their cards; ATI is not.
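The cluster arithmetic above can be sketched as follows (a simplified model of the partitioned bus; partition counts other than G80's six are hypothetical here):

```python
# Nvidia's external bus scales in 64-bit steps, one per memory partition;
# the rumored R600 bus is instead a monolithic 512-bit external interface.
PARTITION_BUS_BITS = 64

def nvidia_bus_width(partitions):
    return partitions * PARTITION_BUS_BITS

print(nvidia_bus_width(6))  # 384 -> G80 GTX as shipped
print(nvidia_bus_width(4))  # 256 -> the early developer boards mentioned
print(nvidia_bus_width(8))  # 512 -> a hypothetical widened part
```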
Originally Posted by Sentential
Last edited by ahmad; 12-18-2006 at 12:49 PM.
My watercooling experience
Water
Scythe Gentle Typhoons 120mm 1850RPM
Thermochill PA120.3 Radiator
Enzotech Sapphire Rev.A CPU Block
Laing DDC 3.2
XSPC Dual Pump Reservoir
Primochill Pro LRT Red 1/2"
Bitspower fittings + water temp sensor
Rig
E8400 | 4GB HyperX PC8500 | Corsair HX620W | ATI HD4870 512MB
I see what I see, and you see what you see. I can't make you see what I see, but I can tell you what I see is not what you see. Truth is, we see what we want to see, and what we want to see is what those around us see. And what we don't see is... well, conspiracies.
OK, this is going nowhere. I've been hearing about the R600 for a very, very long time now. How much longer do they expect us to wait for it? By now there should at least have been some kind of preview or some pictures of the board floating around, but nada, nothing so far... the way things are going, it looks to me like the R600 will only see the light of day sometime around March or April of next year.
I am sick of waiting for it.
Way to go, ATI and AMD.
Uhhhh... the R600 is the first unified shader design, and you forgot the R300, which was the 9700.
Originally Posted by ether.real
Microsoft's homepage can be found at: thesource-dot-ofallevil-dot-com - interesting, no?
Think of something witty and imagine it here.
The R500 is unified; it has 48 (3x16) ALUs for vertex or pixel shader processing.
Originally Posted by Stuperman
Ryzen 9 3900X w/ NH-U14s on MSI X570 Unify
32 GB Patriot Viper Steel 3733 CL14 (1.51v)
RX 5700 XT w/ 2x 120mm fan mod (2 GHz)
Tons of NVMe & SATA SSDs
LG 27GL850 + Asus MG279Q
Meshify C white
When they realized that they needed more time, R400 became R500.
Originally Posted by ether.real
I agree nVidia's recent cards have been poor in terms of bandwidth; I'm not sure if I said that before or not, but I'll say it now.
Originally Posted by ahmad
What I meant by my previous comment is that all nVidia would potentially have to do is add more clusters to get to 512-bit. Granted, that would be a hell of a lot of chips, but then again they could use both sides of the card, which should be enough.
It'll take a new core and PCB, but it will not take much at all for a "G80"-type core to match/beat the R600. The R600, on the other hand, would be much harder to redesign to tackle an upgraded G80, since their design isn't nearly as modular as nVidia's.
You make it sound very simple to re-design the core and release it again.