
Thread: R600 not going to make it on time...again?

  1. #101
    Xtremely High Voltage Sparky's Avatar
    Join Date
    Mar 2006
    Location
    Ohio, USA
    Posts
    16,040
    Except some of the stuff sounds off. Water can't cool it? That by itself sounds crazy; they just cannot release a card that takes something exotic to keep it cool, it would be too expensive.

    just my two cents
    The Cardboard Master
    Crunch with us, the XS WCG team
    Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64

  2. #102
    Xtreme Addict
    Join Date
    Jan 2005
    Location
    Grand Forks, ND (Yah sure, you betcha)
    Posts
    1,266
    Quote Originally Posted by HaLDoL
    Let me try to educate you on Hexus. Rys is the owner/admin of Hexus and Beyond3D. Both sites are very respected and reliable when it comes to 3D hardware. Beyond3D was previously owned by Dave Baumann. Dave left B3D to Rys because he took a job at ATi. Knowing this, every news post on Hexus about ATi hardware is to be taken very seriously and is probably 100% correct, as Rys can verify with Dave.

    I know that this news does not fit in with your ATi dreams, and like every other fanATIc you try to deny these facts by discrediting Hexus. By reacting this way you really show that you don't have a clue whatsoever.
    Hexus is reliable; complain to ATI if the news doesn't fit you.
    Let me educate you on Hexus:

    Quote Originally Posted by Rys
    I don't work for Hexus any more, B3D full time now thankyouplease :!: So those stories have zero to do with moi, although I did explain to someone there how to decode ATI chip revisions not long ago which made it in there. 1GHz with sampling silicon and 2GHz target are pretty laughable, IMO.
    Last edited by turtle; 12-14-2006 at 04:39 PM.
    That is all.

    Peace and love.

  3. #103
    I am Xtreme
    Join Date
    Mar 2005
    Location
    Edmonton, Alberta
    Posts
    4,594
    Quote Originally Posted by turtle
    Nvidia's scalar ALUs ("stream processors") are clocked at 1.35GHz on the GTX, 1.2GHz on the GTS.

    AFAIK ATi is not going to use the straight scalar approach, rather vec3/4/5+scalar, building upon R580/R500... True enough, the 2GHz number could be talking about shader domains if ATi took a page out of the Nvidia playbook... which isn't a bad thing at all if true. If the GPU core clock is indeed in the 750-800+ arena, 2GHz shaders doesn't really sound that far-fetched if somehow similar to G80 in that respect... but it could just be a bad translation, or somewhere along the line the info got garbled. I'd def side with the peeps that take that with just a *lil'* bit 'o' sodium chloride.
    Most people don't seem to realize that with operations streamed through the shader domain, from vertex ops, to geometry ops, to pixel ops, the shader core speed MUST BE FASTER than the ROP speed, unless the ratio is 1:1 (ROP speed is generally the same as "core" speed, no?). Those three operations must happen before the thread is handed off to the ROP and final buffer. In previous ATI chips, R580 for example, the ratio was 3 pixel ops for one ROP... 3:1. However, R600 should be able to do four operations simultaneously instead of the three that R580 did, yet R600 is left with the same number of ROPs. Given that we have 64 units that can do these four instructions, at 4x the ALU count, that makes for 256 ops (vs 48 on R580), of which I assume 3 will be re-processed and 1 handed off to the ROP. I'm missing a step in the "pipeline" here, but generally texture fetch:ROP = 1:1. With the added geometry op, it makes a lot of sense that they would then have to largely increase core speeds in order to have a worthwhile product, else it will simply slightly overstep R580, and that just plain doesn't make sense. It should be AT LEAST as fast as G80, if not faster... and that requires a SEVERE boost in clock speeds. Maybe not 2GHz, but 1600MHz sounds fair to me, and it should be higher if core speed is 800MHz.

    So yes, NN, 4x the ops and 4x the clock rate; however, since they need to hide texture-fetch latency, a lot of that power is simply "not seen" in the end result... it just makes it more efficient. In order to fully maximize that power, you need to design the app to the exact resources of the GPU, and that ain't gonna happen any time soon. They haven't even really maxed out R580 yet!
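
    To put rough numbers on that (every R600 figure here is a rumor, so same grain of salt):

        # ALU-op vs ROP back-of-the-envelope; R580's 48 ALUs / 16 ROPs are
        # known, the R600 numbers (64 units x 4 ops, still 16 ROPs) are rumors
        def ops_per_clock(units, ops_per_unit):
            # shader operations issued per clock across all units
            return units * ops_per_unit

        rops = 16
        r580 = ops_per_clock(48, 1)   # R580: 48 pixel ALUs
        r600 = ops_per_clock(64, 4)   # rumored: 64 units doing 4 ops each
        print("R580: %d ops/clk -> %d:1 vs ROPs" % (r580, r580 // rops))
        print("R600 (rumored): %d ops/clk -> %d:1 vs ROPs" % (r600, r600 // rops))

    Same 16 ROPs, but 3:1 becomes 16:1, which is exactly why the shader clock would have to run way ahead of the core clock.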
    Last edited by cadaveca; 12-14-2006 at 04:42 PM.

  4. #104
    Xtreme Addict
    Join Date
    Dec 2002
    Posts
    1,250
    ATI's Dave Orton stated a take-no-prisoners approach to the upcoming R600.
    More at Beyond3D.
    4670k 4.6ghz 1.22v watercooled CPU/GPU - Asus Z87-A - 290 1155mhz/1250mhz - Kingston Hyper Blu 8gb -crucial 128gb ssd - EyeFunity 5040x1050 120hz - CM atcs840 - Corsair 750w -sennheiser hd600 headphones - Asus essence stx - G400 and steelseries 6v2 -windows 8 Pro 64bit Best OS used - - 9500p 3dmark11 (one of the 26% that isnt confused on xtreme forums)

  5. #105
    XS News
    Join Date
    Aug 2004
    Location
    Sweden
    Posts
    2,010
    That was a rather funny quote, turtle =)

  6. #106
    Xtreme Addict
    Join Date
    Jan 2005
    Location
    Grand Forks, ND (Yah sure, you betcha)
    Posts
    1,266
    Quote Originally Posted by flopper
    ATI´s dave Orton stated a no prisoner approach to the upcoming R600.
    More at beyound3d
    Yeah, Geo quotes Orton as saying R600 will have massive bandwidth... like we didn't know that already just based on GDDR4 alone, although the strength of his wording does make it sound like another R300 revolution. My previous guesstimates on bandwidth stand, as they were calculated assuming R600 has a 512-bit bus. Geo estimates 150+ GB/s... and while that may hold true at stock, there is no 1.2GHz-rated GDDR4, only 1.1 and 1.4... and many will either buy an overclocked card or overclock themselves, which at the 1.4 spec puts the XTX at 180GB/s. In that regard, R600 will crush G80... but it's surely true that memory bandwidth isn't everything... though it sure is good for (MS)AA/AF/HDR!
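
    The math is simple enough if the 512-bit rumor holds: bus width in bytes times the effective DDR rate.

        # peak bandwidth on a rumored 512-bit bus, at the GDDR4 speed grades
        # mentioned above (DDR = 2 transfers per clock)
        def bandwidth_gb_s(bus_bits, mem_clock_ghz):
            return (bus_bits / 8.0) * (mem_clock_ghz * 2)

        for clk in (1.1, 1.2, 1.4):   # 1.2 isn't an actual GDDR4 grade
            print("512-bit @ %.1fGHz GDDR4: %.1f GB/s" % (clk, bandwidth_gb_s(512, clk)))

    That's 140.8 and 179.2 GB/s at the real speed grades; a hypothetical 1.2GHz part would land at 153.6, which is presumably where the 150+ estimate comes from.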

    I'm sure many of you read B3D like I do, but just to throw this up there:

    Quote Originally Posted by Natoma
    Well, I've been telling [Geo] on IM for the past couple of months given my own sources on the matter:

    512-bit external bus (confirmed), 32 ROPs, 96 shaders possibly clocked at 2GHz. Though they could theoretically go to a 4:1 ratio (128 shaders) and clock them at 1.4GHz, i.e. a 2:1 ratio against the core clock.

    Core clocked at 700MHz+.

    1GB memory...makes the most sense [on a 512-bit bus].

    Trust me. I didn't believe my friend on those specs either. But then, he does know a lot of engineering people in Taiwan and China, and was nearly dead to rights with the R520 and R580. Given what's been coming out in the past few months, as well as what we've seen from G80, I'm starting to believe again.
    This jibes with almost every rumor we've heard (Hexus and the like) other than the 64x4 shaders, but does square with Orton's comment about next gen "possibly having 96 shaders", which I find more credible. Also, the little snips I've seen on misc forums from people in the know seemed to hint that 128 shaders was not the case for R600 (a la the old leaked specs), but was the plan for a future part... So perhaps R600 is 32x3, and R680 is 32x4 on 65nm? It would make sense to me... although it's all conjecture at this point.
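
    For what it's worth, the two configs in that quote come out pretty close in raw throughput (pure speculation, and it ignores everything architectural):

        # raw shader throughput of the two rumored configs against the
        # rumored 700MHz core clock; illustrative only
        core = 0.7  # GHz
        for shaders, clk in ((96, 2.0), (128, 1.4)):
            print("%d shaders @ %.1fGHz: %.0f shader-GHz, %.1fx core clock"
                  % (shaders, clk, shaders * clk, clk / core))

    192 vs roughly 179 shader-GHz, so either config gets them to about the same place.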

    Quote Originally Posted by Ubermann
    That was a rather funny quote, turtle =)
    That is all.

    Peace and love.

  7. #107
    Xtreme Enthusiast
    Join Date
    Feb 2006
    Posts
    606
    Bah... AMD/ATI is procrastinating more and more...

    They are gonna get rushed on both the GPU and CPU fronts at this rate.

  8. #108
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,363
    512-bit external bus (confirmed), 32 ROPs, 96 shaders possibly clocked at 2GHz. Though they could theoretically go to a 4:1 ratio (128 shaders) and clock them at 1.4GHz, i.e. a 2:1 ratio against the core clock.

    Core clocked at 700MHz+.

    1GB memory...makes the most sense [on a 512-bit bus].
    If that really is its spec, ATi will at best tie with nVidia. What you all are forgetting is that the G80 is completely modular; the only reason why they didn't go 512-bit right off the bat is because they couldn't justify the yields.

    The original test cards sent to developers were 256-bit, not 384. It wouldn't take much at all for G80 to go full 512-bit, and I can promise you they are going to. Given that, ATi has a serious problem on their hands, since it would be ~160GB/s versus at best 128GB/s from ATi. They would need at least a 25% headstart on the clock speed, *at least*, assuming they have the same IPC (which we know ATi won't).
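
    (The 25% is just the ratio of those two speculative bandwidth figures:)

        # the claimed clock headstart is the ratio of the two speculative
        # bandwidth numbers above, in GB/s
        g80_512bit, r600_best = 160.0, 128.0
        ratio = g80_512bit / r600_best
        print("bandwidth ratio: %.2fx -> ~%d%% clock headstart" % (ratio, round((ratio - 1) * 100)))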

    ATi has been surviving on RAM bandwidth for several generations, and it's finally come back to bite them. It's what they get for not changing their shader path since the R300.

    The fact that ATi claims they are going "all out against" the G80 is a clear indication to me that they are terrified of it. Most of their products have a 3-4 year lead time in the pipe; they wouldn't dare sacrifice their future products (i.e. going all out) if they weren't truly afraid that what they have isn't enough. Granted, they might pull another R300 out of their hat to impress their new owners, and for their sake I hope they do.
    Last edited by Sentential; 12-15-2006 at 11:40 AM.
    NZXT Tempest | Corsair 1000W
    Creative X-FI Titanium Fatal1ty Pro
    Intel i7 2500K Corsair H100
    PNY GTX 470 SLi (700 / 1400 / 1731 / 950mv)
    Asus P8Z68-V Pro
    Kingston HyperX PC3-10700 (4x4096MB)(9-9-9-28 @ 1600mhz @ 1.5v)

    Heatware: 13-0-0

  9. #109
    Xtreme Addict
    Join Date
    Jan 2005
    Location
    Grand Forks, ND (Yah sure, you betcha)
    Posts
    1,266
    Fundamentally, I totally agree with you on the R600 vs G80 raw architectural specs. If that is true, they are very similar, sans the scalar/vector shader approach, and perhaps ATi sacrificing more shaders for a higher clock; presumably whatever has better yields and/or performance. Granted, we don't know the final clocks on R600, or how mature the drivers are or will become for either high-end part.

    Where we disagree is on bandwidth. I don't think Nvidia will be so quick to 512-bit. As you implied, this would require an increase in ROPs to 32, as they are tied together in the scalable architecture, and while perhaps it's possible, it would create a large die with all those ROPs/shaders/larger MC, even on 80nm. It would also cost development time.

    I believe Nvidia will take the same approach they took with G70: optically shrink G80 as it is to 80nm and get it out the door quickly in '07, with little additional R&D, and reap the benefits of the smaller die. It would be the smarter business decision imho, as they could fight ATi on the cost front, if not on the performance front with higher clocks and perhaps GDDR4.

    I personally don't believe 512-bit will come from Nvidia until 65nm at the middle or end of the year... be that G8x or G90. From there they could shrink that part to 55nm... and the cycle continues.
    Last edited by turtle; 12-15-2006 at 12:21 PM.
    That is all.

    Peace and love.

  10. #110
    Xtreme Addict
    Join Date
    Oct 2004
    Posts
    1,838
    kinda anticlimactic question:
    new AA modes with R600?
    16xCSAA on the GeForce 8s is really nice.
    DFI P965-S/core 2 quad q6600@3.2ghz/4gb gskill ddr2 @ 800mhz cas 4/xfx gtx 260/ silverstone op650/thermaltake xaser 3 case/razer lachesis

  11. #111
    Xtreme Member
    Join Date
    Aug 2004
    Location
    rutgers
    Posts
    465
    Quote Originally Posted by Sentential
    If that really is its spec, ATi will at best tie with nVidia.
    If by this you mean nVidia will just tack on the extra units for G81, then yes, I agree; but if you mean that R600 will only tie G80, then I disagree, at least if those specs are true. Keep in mind that rumors have placed those 64/96 shaders as being vec4 and not scalar, so almost 4 times as capable as nVidia's.
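
    Rough per-clock numbers behind that "almost 4 times" (the R600 side is all rumor):

        # vec4 vs scalar component throughput per clock; G80's 128 scalar
        # SPs are known, the R600 unit counts are rumors
        for units in (64, 96):
            print("%d vec4 units = %d components/clk vs G80's 128 scalar SPs"
                  % (units, units * 4))

    Of course G80 claws a lot of that back with its 1.35GHz shader clock and the easier scheduling of scalar units, so per-clock components are only part of the story.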

  12. #112
    Xtreme Addict
    Join Date
    Jan 2005
    Location
    Grand Forks, ND (Yah sure, you betcha)
    Posts
    1,266
    Quote Originally Posted by grimREEFER
    kinda anticlimactic question:
    new AA modes with R600?
    16xCSAA on the GeForce 8s is really nice.
    That's the question on my mind as well... I mean, with a whole shatload of bandwidth, one would hope there is some new form of MSAA... but even if there isn't, the current AA mode improvements ATi has utilized and even just recently released (ASBT/EATM alpha-blending AA, adaptive AA, ASM aka Alpha Sharpen Mode, etc... there are like 12 that can be enabled via Ray's Tray Tools) should look good on current and older cards... not to mention if it has support for the DX10.1 spec, which basically ALL pertains to more impressive and effective MSAA.
    Last edited by turtle; 12-15-2006 at 02:47 PM.
    That is all.

    Peace and love.

  13. #113
    Xtreme Member
    Join Date
    Jul 2005
    Location
    Richmond, VA
    Posts
    109
    R600 looks to be a beast...I can't wait to see some bench #s.
    [-AMD Opteron 165 @ 3.0Ghz-]
    [-DFI LAN Party UT uNF4 Ultra-D-]
    [-AData 4GB (4x1GB) DDR484-]
    [-eVGA Geforce 8800GTS 640MB 320-Bit-]
    [-Creative Sound Blaster X-Fi XtremeMusic-]
    [-Seagate 160GB & 250GB 16MB Cache SATA300-]
    [-OCZ GameXStream OCZ700GXSSLI-]
    [-DELL 2005FPW LCD Monitor via DVI-D- & Westinghouse 22w3 LCD Monitor via DVI-D]
    [-Windows Vista Ultimate x64-]

  14. #114
    XS News
    Join Date
    Aug 2004
    Location
    Sweden
    Posts
    2,010
    Quote Originally Posted by turtle
    I believe Nvidia will go with the same approach they took towards G70, that being they will optical shrink down G80 the way it is to 80nm and get it out the door quickly in 07, with limited needed R&D, and reap the benefits of the smaller die. It would be the smarter business decision imho, as they could fight ATi on the cost front, if not on the performance front with higher clocks and perhaps GDDR4.
    That sounds very much like their past, and this will probably be the same.
    But who knows, I can't see them making a new GPU if they lose the "crown" when R600 arrives.
    Last edited by Ubermann; 12-15-2006 at 03:15 PM.

  15. #115
    Xtreme X.I.P.
    Join Date
    Aug 2004
    Location
    Chile
    Posts
    4,151
    Quote Originally Posted by Sentential
    If that really is its spec, ATi will at best tie with nVidia. What you all are forgetting is that the G80 is completely modular; the only reason why they didn't go 512-bit right off the bat is because they couldn't justify the yields.

    The original test cards sent to developers were 256-bit, not 384. It wouldn't take much at all for G80 to go full 512-bit, and I can promise you they are going to. Given that, ATi has a serious problem on their hands, since it would be ~160GB/s versus at best 128GB/s from ATi. They would need at least a 25% headstart on the clock speed, *at least*, assuming they have the same IPC (which we know ATi won't).

    ATi has been surviving on RAM bandwidth for several generations, and it's finally come back to bite them. It's what they get for not changing their shader path since the R300.

    The fact that ATi claims they are going "all out against" the G80 is a clear indication to me that they are terrified of it. Most of their products have a 3-4 year lead time in the pipe; they wouldn't dare sacrifice their future products (i.e. going all out) if they weren't truly afraid that what they have isn't enough. Granted, they might pull another R300 out of their hat to impress their new owners, and for their sake I hope they do.
    My friend, I totally agree.

    The G80 arch is great, and Nvidia will up the shader processors for sure; the question is how much they will need to beat ATI, and whether yields will be good enough for that.

  16. #116
    Xtreme Guru
    Join Date
    Aug 2005
    Location
    Burbank, CA
    Posts
    3,766
    ATI has fallen off. It all started with the short supply of the X800 cards... then the short supply of the X1800 cards. The X1900s had good supply but were overpriced, and performance was comparable to a 7900GT, which was $275 at the time... Every now and then I like to switch companies around, but I haven't since I got my 6800GT... Seems like Nvidia has been doing everything right after the horrible FX series... Eventually the ATI name will be gone and everything will be renamed to AMD... ATI has been around for a while, but as of late they are having all kinds of problems.

  17. #117
    XS News
    Join Date
    Aug 2004
    Location
    Sweden
    Posts
    2,010
    I see no problem with ATI.

  18. #118
    Muslim Overclocker
    Join Date
    May 2005
    Location
    Canada
    Posts
    2,786
    What you seem to not get is the fact that Nvidia's "384-bit" is not comparable to ATi's 512-bit.

    With Nvidia, each "cluster" (or controller) has a 64-bit bus with the memory connected to it. You have 6 clusters; that's 64-bit x 6, and that gets you Nvidia's number.

    With ATI, memory acts as one unit with a 512-bit interface to the memory controller, which has a 1024-bit interface to the GPU (or memory controller, wherever).

    And Nvidia is limited architecturally in terms of increasing the amount of memory they can put on their cards; ATI is not.
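
    The partition math looks like this (the 8-partition, 512-bit case is hypothetical):

        # G80's external bus grows in 64-bit steps with its ROP/memory
        # partitions: 5 on the GTS, 6 on the GTX, 8 would give 512-bit
        for partitions in (5, 6, 8):
            print("%d x 64-bit partitions = %d-bit bus" % (partitions, partitions * 64))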

    Quote Originally Posted by Sentential
    If that really is its spec, ATi will at best tie with nVidia. What you all are forgetting is that the G80 is completely modular; the only reason why they didn't go 512-bit right off the bat is because they couldn't justify the yields.

    The original test cards sent to developers were 256-bit, not 384. It wouldn't take much at all for G80 to go full 512-bit, and I can promise you they are going to. Given that, ATi has a serious problem on their hands, since it would be ~160GB/s versus at best 128GB/s from ATi. They would need at least a 25% headstart on the clock speed, *at least*, assuming they have the same IPC (which we know ATi won't).

    ATi has been surviving on RAM bandwidth for several generations, and it's finally come back to bite them. It's what they get for not changing their shader path since the R300.

    The fact that ATi claims they are going "all out against" the G80 is a clear indication to me that they are terrified of it. Most of their products have a 3-4 year lead time in the pipe; they wouldn't dare sacrifice their future products (i.e. going all out) if they weren't truly afraid that what they have isn't enough. Granted, they might pull another R300 out of their hat to impress their new owners, and for their sake I hope they do.
    Last edited by ahmad; 12-18-2006 at 12:49 PM.

    My watercooling experience

    Water
    Scythe Gentle Typhoons 120mm 1850RPM
    Thermochill PA120.3 Radiator
    Enzotech Sapphire Rev.A CPU Block
    Laing DDC 3.2
    XSPC Dual Pump Reservoir
    Primochill Pro LRT Red 1/2"
    Bitspower fittings + water temp sensor

    Rig
    E8400 | 4GB HyperX PC8500 | Corsair HX620W | ATI HD4870 512MB


    I see what I see, and you see what you see. I can't make you see what I see, but I can tell you what I see is not what you see. Truth is, we see what we want to see, and what we want to see is what those around us see. And what we don't see is... well, conspiracies.



  19. #119
    Xtreme Enthusiast
    Join Date
    Apr 2006
    Posts
    820
    OK, this is going nowhere. I've been hearing about the R600 for a very, very long time now. How much longer do they expect us to wait for it? By now there should at least have been some kind of preview or some pictures of the board floating around, but nada, nothing so far... The way things are going, it looks to me like the R600 will only see the light of day sometime around March or April of next year.

    I am sick of waiting for it.

    Way to go, ATI and AMD.

  20. #120
    Xtreme Enthusiast
    Join Date
    Mar 2005
    Posts
    706
    Quote Originally Posted by ether.real
    Actually, R600 makes perfect sense because all their unified shader parts have always had an x00 nomenclature.

    R400: Never released, but developed
    R500: Xenos for Xbox 360
    R600: what we all know and love.
    uhhhh.... the R600 is the first unified shader design, and you forgot the R300, which was the 9700
    Microsoft's homepage can be found at: thesource-dot-ofallevil-dot-com - interesting, no?

    Think of something witty and imagine it here.

  21. #121
    Xtreme Mentor
    Join Date
    Feb 2004
    Location
    The Netherlands
    Posts
    2,984
    Quote Originally Posted by Stuperman
    uhhhh....the R600 is the first unified shader design, and you forgot the R300 which was the 9700
    The R500 is unified; it has 48 (3x16) ALUs for vertex or pixel shader processing.


    Ryzen 9 3900X w/ NH-U14s on MSI X570 Unify
    32 GB Patriot Viper Steel 3733 CL14 (1.51v)
    RX 5700 XT w/ 2x 120mm fan mod (2 GHz)
    Tons of NVMe & SATA SSDs
    LG 27GL850 + Asus MG279Q
    Meshify C white

  22. #122
    Xtreme Member
    Join Date
    Oct 2005
    Posts
    462
    R400 was unified too.

  23. #123
    Xtreme Mentor
    Join Date
    Feb 2004
    Location
    The Netherlands
    Posts
    2,984
    Quote Originally Posted by ether.real
    R400 was unified too.
    When they realized that they needed more time, the R400 became the R500.

    Ryzen 9 3900X w/ NH-U14s on MSI X570 Unify
    32 GB Patriot Viper Steel 3733 CL14 (1.51v)
    RX 5700 XT w/ 2x 120mm fan mod (2 GHz)
    Tons of NVMe & SATA SSDs
    LG 27GL850 + Asus MG279Q
    Meshify C white

  24. #124
    Xtreme Addict
    Join Date
    Nov 2004
    Posts
    1,363
    Quote Originally Posted by ahmad
    What you seem to not get is the fact that Nvidia's "384-bit" is not comparable to ATi's 512-bit.

    With Nvidia, each "cluster" (or controller) has a 64-bit bus with the memory connected to it. You have 6 clusters; that's 64-bit x 6, and that gets you Nvidia's number.

    With ATI, memory acts as one unit with a 512-bit interface to the memory controller, which has a 1024-bit interface to the GPU (or memory controller, wherever).

    And Nvidia is limited architecturally in terms of increasing the amount of memory they can put on their cards; ATI is not.
    I agree nVidia's recent cards have been poor in terms of bandwidth; I'm not sure if I said that before or not, but I'll say it now.

    What I meant by my previous comment is that all nVidia would potentially have to do is add more clusters to get to 512-bit. Granted, that would be a hell of a lot of chips, but then again they could use both sides of the card, which should be enough.

    It'll take a new core and PCB, but it will not take much at all for the "G80"-type core to match/beat the R600. The R600, on the other hand, would be much harder to redesign to tackle an upgraded G80, since their design isn't nearly as modular as nVidia's.
    NZXT Tempest | Corsair 1000W
    Creative X-FI Titanium Fatal1ty Pro
    Intel i7 2500K Corsair H100
    PNY GTX 470 SLi (700 / 1400 / 1731 / 950mv)
    Asus P8Z68-V Pro
    Kingston HyperX PC3-10700 (4x4096MB)(9-9-9-28 @ 1600mhz @ 1.5v)

    Heatware: 13-0-0

  25. #125
    XS News
    Join Date
    Aug 2004
    Location
    Sweden
    Posts
    2,010
    You make it sound very simple to redesign the core and release it again.
