
Thread: Official HD 2900 Discussion Thread

  #11
    Xtreme Enthusiast
    Join Date
    Sep 2006
    Location
    Nordschleife!
    Posts
    705
    I think a lot of people here are in denial, and I kind of understand that. Many of us just waited and waited for something that is, let's face it, a major disappointment. The hype over R600 was downright insane. Everybody was like "R600 will be at least 50% faster than the 8800GTX, and with much better IQ on top of that".

    A few weeks ago, when it became evident that the HD2900XT wouldn't be the G80 killer everyone took for granted, some argued that future drivers would solve that. More in-depth reviews, like the ones at Anand and Beyond3D, clearly state that the R600 architecture is extremely software dependent:

    Quote Originally Posted by Anandtech
    If it seems like all this reads in a very complicated way, don't worry: it is complex. While AMD has gone to great lengths to build hardware that can efficiently handle parallel data, dependencies pose a problem to realizing peak performance. The compiler might not be able to extract five operations for every VLIW instruction. In the worst case scenario, we could effectively see only one SP per block operating with only four VLIW instructions being issued. This drops our potential operations per clock rate down from 320 at peak to only 64.
    Quote Originally Posted by Anandtech
    But maximizing throughput on the AMD hardware will be much more difficult, and we won't always see peak performance from real code. On the best case level, R600 is able to do 2.5x the work of G80 per clock (320 operations on R600 and 128 on G80). Worst case for code dependency on both architectures gives the G80 a 2x advantage over R600 per clock (64 operations on R600 with 128 on G80).
    Quote Originally Posted by Anandtech
    While NVIDIA focused on maximizing parallelism in this area of graphics, AMD decided to try to extract parallelism inside the instruction stream by using a VLIW approach. AMD's average case will be different depending on the code running, though so many operations are vector based, high utilization can generally be expected.
    Quote Originally Posted by Beyond3D
    While going 5-way scalar has allowed AMD more flexibility in instruction scheduling compared to their previous hardware, that flexibility arguably makes your compiler harder to write, not easier. So as a driver writer you have more packing opportunities -- and I like to think of it almost like a game of Tetris when it comes to a GPU, but only with the thin blocks and with those being variable lengths, and you can sometimes break them up! -- those opportunities need handling in code and your corner cases get harder to find.

    The end result here is a shader core with fairly monstrous peak floating point numbers, by virtue of the unit count in R600, its core clock and the register file of doom, but one where software will have a harder time driving it close to peak. That's not to say it's impossible, and indeed we've managed to write in-house shaders, short and long and with mixtures of channels, register counts and what have you, that run close to max theoretical throughput. However it's a more difficult proposition for the driver tech team to take care of over the lifetime of the architecture, we argue, than their previous architecture.
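    To see what those quotes mean in practice, here's a toy greedy scheduler of my own (NOT AMD's actual compiler, just an illustration of the idea): ops go into 5-slot VLIW bundles, but an op can only issue once everything it depends on has issued in an earlier bundle. Independent code fills bundles; a dependent chain wastes 4 of 5 slots per clock, which scaled up across R600's 64 units is exactly the 320-vs-64 ops/clock gap Anandtech describes.

    ```python
    # Toy VLIW5 packing sketch (my own illustration, not AMD's compiler).
    # ops: dict mapping op name -> set of op names it depends on.
    def pack_vliw(ops, slots=5):
        """Greedily pack ops into bundles of up to `slots` independent ops."""
        issued, bundles = set(), []
        remaining = dict(ops)
        while remaining:
            # Ops whose dependencies have all issued in earlier bundles.
            ready = [name for name, deps in remaining.items() if deps <= issued]
            if not ready:
                raise ValueError("dependency cycle")
            bundle = ready[:slots]
            bundles.append(bundle)
            issued.update(bundle)
            for name in bundle:
                del remaining[name]
        return bundles

    # Best case: 5 independent ops fit in ONE bundle (5/5 slots busy).
    independent = {f"op{i}": set() for i in range(5)}
    print(len(pack_vliw(independent)))  # 1

    # Worst case: a serial chain needs FIVE bundles (1/5 slots busy) --
    # the "64 of 320 ops/clock" scenario from the quotes, scaled down.
    chain = {f"op{i}": ({f"op{i-1}"} if i else set()) for i in range(5)}
    print(len(pack_vliw(chain)))  # 5
    ```

    The point of the Tetris analogy: the hardware can't reorder this for you, so the driver's compiler has to find those independent ops itself, shader by shader, for the life of the architecture.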
    But what people don't realize is that ATI had a billion years to develop proper drivers. It doesn't matter what ATI says. The R600 is very late to the game, unfortunately. That "magic driver" is to be taken with a HUGE grain of salt. The IQ loss is very evident.

    Like I said in another 2900xt thread:

    "the HD2900xt is such a disaster that instead of bringing Nvidia's prices down, it made'em go up. Yesterday the cheapest 8800gts 320 was $250 and now is $295 at newegg:

    http://www.newegg.com/Product/Produc...613&name=320MB

    Before anyone calls me a fanboy, let me put it this way: I've owned only ATI cards since the R300. Not because I like ATI and hate Nvidia, but just because they were better and faster. I too hoped that R600 would be a beast of a GPU but, unfortunately, it isn't. In my book it's a major flop, and I truly hope AMD/ATI get their act together and put up some serious competition against both Nvidia and Intel
    Last edited by Caparroz; 05-14-2007 at 10:11 PM.
    Murray Walker: "And there are flames coming from the back of Prost's McLaren as he enters the Swimming Pool."

    James Hunt: "Well, that should put them out then."
