Page 11 of 22
Results 251 to 275 of 526

Thread: HD5870 and HD5850 Reviews

  1. #251
    I am Xtreme
    Join Date
    Dec 2007
    Posts
    7,750
    when the 5850 hits $200 i might have to make the jump, can't wait to see if they make 2GB versions or what it will be like.

  2. #252
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by Jamesrt2004 View Post
    meh you know what he means.. compare the percentage of people with normal C2D and C2Q's etc at normal 2.4~3.2 ghz compared to people with i7 @ like 4.2 lol

    I like him and im sure a lot of other people would like to see results on say a Q6600 @ 3.2 or 3.6 or something a little more.. realistic for a lot of us

    What's the use of putting up a subjective review where the top 3 cards nicely smash into a CPU bottleneck again and again? The reason reviewers push their chips to high clocks is to get differentiation between the cards' scores. Due to the nature of the majority of today's games, many high end cards will bottleneck at 1680 and even 1920 resolution in some cases, even with an i7 @ 4GHz, let alone a C2Q.
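SKYMTL's bottleneck point can be sketched as a toy model where the visible frame rate is capped by whichever of the CPU or GPU is slower (all numbers below are invented for illustration, not benchmark data):

```python
# Toy model: effective FPS is capped by the slower of CPU and GPU.
# All numbers are made up for illustration, not measured results.

def effective_fps(cpu_fps, gpu_fps):
    """The frame rate you actually see is the minimum of what the
    CPU can feed and what the GPU can render."""
    return min(cpu_fps, gpu_fps)

# A stock C2Q might only prepare ~75 frames/s; a 4 GHz i7 maybe ~120.
cards = {"card_a": 90, "card_b": 110, "card_c": 130}  # GPU-limited FPS

for cpu_name, cpu_fps in [("C2Q stock", 75), ("i7 @ 4GHz", 120)]:
    results = {c: effective_fps(cpu_fps, g) for c, g in cards.items()}
    print(cpu_name, results)
```

On the slow CPU all three cards collapse to the same 75 fps and look identical; only on the fast CPU do their scores spread out, which is exactly why reviewers overclock the test rig.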

  3. #253
    Xtreme Addict
    Join Date
    Jan 2004
    Location
    somewhere in Massachusetts
    Posts
    1,009
    Considering ATI/AMD usually gain quite a bit over the first 3-6 months with driver tweaks, especially for the X2/Crossfire implementation, this looks damn good!

    You get 4870 X2 / GTX 295 level performance with less power draw from a single card for money about equal to what the X2 goes for in most places (and about $100 less than the 295)

    I'm a bit skeptical about how a single 5870 will drive 6 displays well (keyword: well) but I guess I'll just have to wait & see.

  4. #254
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by jmke View Post
    only 2, Crysis, Crysis Warhead, with high IQ
    1. crysis and crysis warhead are the same, its like calling hl2 ep1 and ep2 different games :P

    2. what settings? at 1680x1050 and 1920x1200 4aa 16af a 5850 has about the same min fps as a gtx285 and 4890, only higher av fps... but even on the 5850 its unplayable imo... and even at 1280x1024 where the 5850 pulls ahead i wouldnt really call 28min and 40av fps playable... both anandtech and xbitlabs get about the same numbers there...

    what you really want for crysis is a 5870 or gtx295... ideally gtx285SLI or 5870CF i guess... but its not worth it, its the only game that needs this and its not like its such a great game giving you weeks of funtime...

    Quote Originally Posted by SKYMTL View Post
    While I haven't read it, I am sure the games showing the most improvement are the ones that were either new releases or not even released when the HD 4870 was brought to market.
    no, the games were out before the launch... check it out, i was surprised myself... there are def 2 cases or so where a game was really slow and then 8.8 gave a big boost and probably fixed something and then there were barely improvements, but a couple of games saw 10% boosts from 8.7 to i think up to 8.12 is what they tested?

    Quote Originally Posted by kiwi View Post
    Back then there were no x2 solutions cause x2 is just a cheat Unless they drastically reduce power consumption.

    With cpu cores they did that. More cores but +/- same power. But with GPUs they can't (yet). You get the point OK, going a bit offtopic here
    yeh ok... those days are def over...

    flippin_waffles, tpu tested with an i7 at 3.8, AT with a 920@3.33 and xbit with a 965@3.2, and hwcanucks tested with a 920@4g iirc... so anandtech and xbitlabs should be fine for you... there was a link to some dutch site that tested on an amd phenom2 as well, and there was almost no difference between 285, 4890, 295 and 5870 in many games, cause they were cpu bottlenecked... not sure what clocks you need with an i7 to see differences between the cards, but from AT and Xbit we can see that at 3.2 theres already a notable difference...

    Quote Originally Posted by Boissez View Post
    Well either the 8800GT is 20% faster than the 9800GT... or the numbers are somehow wrong.
    yes, theres def something wrong... everything above the 5870 seems too close together and doesnt really scale up... the only way to get this is if the cpu is limiting... AT tested with their cpu at 3.33 and they see bigger scaling above the 5870... so idk what cpus other reviews tested with to draw the overall scores down...

    Quote Originally Posted by Mech0z View Post
    hmmm that heatsink looks nice design wise, but cooling perf?
    probably better than the restricted airflow stock design, but doesnt look great... especially the blue green pcb is... yuck...

    Quote Originally Posted by Jamesrt2004 View Post
    meh you know what he means.. compare the percentage of people with normal C2D and C2Q's etc at normal 2.4~3.2 ghz compared to people with i7 @ like 4.2 lol

    I like him and im sure a lot of other people would like to see results on say a Q6600 @ 3.2 or 3.6 or something a little more.. realistic for a lot of us
    who's seriously into games and runs a c2d or c2q below 3ghz... pff come on, and the i7 isnt really faster in games compared to c2d/c2q clock for clock...

    Quote Originally Posted by SKYMTL View Post
    What the use of putting up a subjective review where the top 3 cards nicely smash into a CPU bottleneck again and again? The reason reviewers push their chips to high clocks is to get a differentiation between the cards' scores. Due to the nature of the majority of today's games, many high end cards will bottleneck at 1680 and even 1920 resolution in some cases even with an i7 @ 4Ghz let alone a C2Q.
    i know its a lot of work... but could you maybe do at least one or a few tests with lower cpu clocks? i think that would actually be a really nice thing to see and very helpful... cause people think card a and b are notably different but they might not be if they have a stock cpu at 2.5ghz (check the steam hw survey)

    just a suggestion... would def be interesting to see if some cards scale more with faster cpus than others as well for example... so maybe some cards dont need a fast cpu and are a better upgrade for an old system than others... get what i mean?
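saaya's suggestion amounts to a sweep of CPU clock × card; one hypothetical way to summarize such a sweep is to find the lowest clock at which each card reaches (say) 95% of its peak FPS (the numbers below are invented placeholders, not benchmark data):

```python
# Hypothetical helper for a CPU-scaling sweep: for each card, find the
# lowest CPU clock at which it hits 95% of its best FPS. A card that
# saturates at a low clock doesn't need a fast CPU to shine; one that
# keeps scaling does. All numbers are invented, not measurements.

results = {  # fps at CPU clocks 2.4 / 3.2 / 4.0 GHz
    "midrange_card": {2.4: 60, 3.2: 62, 4.0: 62},
    "highend_card":  {2.4: 70, 3.2: 95, 4.0: 104},
}

def clock_where_saturated(fps_by_clock, threshold=0.95):
    """Lowest clock reaching `threshold` of the card's peak FPS."""
    peak = max(fps_by_clock.values())
    for clock in sorted(fps_by_clock):
        if fps_by_clock[clock] >= threshold * peak:
            return clock
    return None

for card, fps in results.items():
    print(card, "saturates at", clock_where_saturated(fps), "GHz")
```

Under these made-up numbers the midrange card is already saturated at 2.4 GHz while the high-end card still gains all the way to 4.0 GHz, which is the "better upgrade for an old system" question in concrete form.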

  5. #255
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by Wesker View Post
    I was expecting further improvements in efficiency. Not only does it share similar paper specifications with R700, but it also shares RV790's core clock speed.

    R600 -> RV770 was impossible? RV770's performance over R600 was greater than the 2.5x increase in the ALU count. Most of RV770's performance wins over R600 came from internal improvements in the chip design (most notable being the inclusion of hardware based MSAA resolve).

    Years ago, were you also around saying that the performance of chips like Conroe, Hammer, Nehalem, R580, R300 and G80 were impossible?

    ...oh, and thanks for the attitude.
    You expected something which wasn't in the card. The only known, factual changes were the "double RV790", so there was no room for efficiency improvements. Using the same architecture and doubling stuff does not increase efficiency, despite the "more than twice as complex scheduler as before".

    But yeah, if you're disappointed with a nearly 2x perf. increase over the last gen, that's your thing after all.

    Let's wait for R900 and new(?) µArch, hopefully there will be changes in the efficiency.

  6. #256
    Xtreme Cruncher
    Join Date
    Sep 2008
    Location
    Kansas City, Missouri
    Posts
    2,122
    I love the look of that cooler. While I wish they would have made the PCB straight-up black, you can't always get what you want; that card looks sick.
    ~ Little Slice of Heaven ~
    Lian Li PC-A05NB w/ Gentle Typhoons
    Core i7 860 @ 3gHz w/ Thor's Hammer
    eVGA P55 SLI
    8GB RAM
    Gigabyte 7970
    Corsair HX850
    OCZ Vertex3 SSD 240GB

    ~~~~~~~~~~~~

  7. #257
    Xtreme Addict
    Join Date
    Jan 2009
    Location
    Italia
    Posts
    1,021

  8. #258
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by Arkangyl View Post
    Considering ATI/AMD usually gain quite a bit over the first 3-6 months with driver tweaks, especially for the X2/Crossfire implementation, this looks damn good!

    You get 4870 X2 / GTX 295 level performance with less power draw from a single card for money about equal to what the X2 goes for in most places (and about $100 less than the 295)

    I'm a bit skeptical about how a single 4870 will drive 6 displays well (keyword: well) but I guess I'll just have to wait & see.
    no, you get slightly less perf than a dual gpu card, and it costs slightly more than a 4870x2 :P

    like i said, if the price matched that of the 4870x2, that would make it a great deal...

  9. #259
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    Another thing i'd like to get off my mind is why the heck they design such fancy coolers, and then make the card so you either have to stand on your head to see it once it's in your computer, or hang your box from the ceiling. Why can't cards be designed so the bloody cooler is on top? Surely it can't be that difficult to flip everything around, and there's plenty of PCI-E slots to accommodate such a thing if there's no room between the top slot and the CPU cooler!!

  10. #260
    Xtreme Member
    Join Date
    Mar 2007
    Location
    Pilipinas
    Posts
    445
    Quote Originally Posted by saaya View Post
    no, you get slightly less perf than a dual gpu card, and it costs slightly more than a 4870x2 :P

    like i said, if the price would match that of the 4870x2, that would make it a great deal...
    You pay more up front but spend less on electricity in the long run especially @idle, and most likely less driver headaches... that's already a great deal methinks

  11. #261
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by SKYMTL View Post
    What the use of putting up a subjective review where the top 3 cards nicely smash into a CPU bottleneck again and again? The reason reviewers push their chips to high clocks is to get a differentiation between the cards' scores. Due to the nature of the majority of today's games, many high end cards will bottleneck at 1680 and even 1920 resolution in some cases even with an i7 @ 4Ghz let alone a C2Q.
    but even at the bottleneck we then know what to expect, or whether it's worth it or not. if it's say 60fps at 1680x1050 I would be like woah... but then realise im not running a beasty i7 under the hood and would probably get less than that

    not having a go, I see why they kinda have to do it, but it'd still be nice to have a review with more... realistic scores, closer to what most users would expect?
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James

  12. #262
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by SKYMTL View Post
    What the use of putting up a subjective review where the top 3 cards nicely smash into a CPU bottleneck again and again? The reason reviewers push their chips to high clocks is to get a differentiation between the cards' scores. Due to the nature of the majority of today's games, many high end cards will bottleneck at 1680 and even 1920 resolution in some cases even with an i7 @ 4Ghz let alone a C2Q.
    ^^^^
    Yeah what he says.

    Ever since the 4870x2 launched on August 12, 2008 - there has been little motivation to upgrade. As appealing as dual 30" 2560x1600 may be, or 8xAA.. it's the very definition of excessive. Heck, most "chumps" with a 22-24" LCD or 50" HDTV are stuck at 1920x1080.. making 25x16 results irrelevant.

    Look at the 5870 and 5870 CF benchmarks:
    - Half the games are CPU limited.. all Dual-GPU/CF converge at some crazy high 150-300fps limit.
    - Another big portion like Fallout3, FarCry2, STALKER, RE5, Batman, and especially HAWX show HD 5870 bandwidth starved and falling far behind 4870x2.
    - Finally, very shader intensive Crysis and others show HD 5870 taking clear lead.

    But what's really the relevance if the HD 5870 is 10% or 50% faster than a GTX285 at 2560x1600 8xAA? 40 fps vs 30 fps looks impressive, but neither is playable. And even if it was, AND you had such a monitor, would the $$$ justify the higher resolution?
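The 40 vs 30 fps comparison is easier to judge in frame times; a quick check of the arithmetic:

```python
# Frame time (ms) for a given FPS: a 33% FPS lead at unplayable rates
# still leaves both cards well short of a smooth 60 fps target.

def frame_time_ms(fps):
    return 1000.0 / fps

print(frame_time_ms(30))  # ~33.3 ms per frame
print(frame_time_ms(40))  # 25.0 ms per frame
print(frame_time_ms(60))  # ~16.7 ms, the usual smoothness target
```

Going from 30 to 40 fps shaves ~8 ms per frame, but both are still far from the ~16.7 ms needed for 60 fps, which is the "impressive but not playable" point in numbers.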

    Both put up a similar 70-100 fps avg at 19x10 4xAA in virtually every game (Crysis excluded of course).

    Bottom line: regardless of how high the HD 5870 or GT300 scores, until users upgrade from existing 1080p displays, there's little benefit to upgrading an existing GTX285/GTX275/HD4890/HD4870.

    ... until of course the next must-have game like Half Life 3 or Doom4 requires DX11

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  13. #263
    Xtreme Member
    Join Date
    Jan 2008
    Location
    Shin Osaka, Japan
    Posts
    152
    Quote Originally Posted by Calmatory View Post
    You expected something which wasn't in the card. The only (known, and fact) changes were the "double RV790", so no possibility for efficiency improvements. Using the same architecture and doubling stuff does not increase the efficiency, despite the "More than twice as complex scheduler as before".

    But yeah, if you're disappointed with nearly 2x perf. increase over the last gen, it's your thing afterall.

    Let's wait for R900 and new(?) µArch, hopefully there will be changes in the efficiency.
    Again I bring up RV770. The only known feature about that chip (just before and during launch) was its 2.5x increase in ALU count. Impressive in its own right, however it brought more than a 2.5x performance increase over R600 and RV670.

    Also I was comparing the jump from R700 (2xRV770) not 2xRV790. In this light (2xRV790), Cypress looks even more unimpressive, with an average 1.6x increase over RV790 (they share the same core clocks, so it's quite fair to compare the two architectures).

    Somehow, a 40% theoretical performance advantage was lost during the 100% increase in specifications count.

    As I said in my original post, it could just be the beta drivers not showing Cypress' fullest potential. But given that R600, RV770 and Cypress share the same underlying base architecture, I don't think there's too much to be squeezed out of new drivers. Still, I could be surprised...
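Wesker's point boils down to a scaling-efficiency calculation: doubling the units at the same clock is a 2.0x theoretical speedup, and an observed ~1.6x average works out to 80% efficiency:

```python
# Scaling efficiency: observed speedup divided by theoretical speedup.
# 2x the execution units at the same clock gives a 2.0x theoretical
# ceiling; the ~1.6x average Wesker cites leaves ~0.4x of the doubling
# unrealized. The 1.6x figure is the forum estimate, not measured here.

def scaling_efficiency(observed, theoretical):
    return observed / theoretical

eff = scaling_efficiency(1.6, 2.0)
print(f"efficiency: {eff:.0%}")           # 80%
print(f"unrealized speedup: {2.0 - 1.6:.1f}x")
```

That 0.4x gap against the theoretical ceiling is the "40% theoretical performance advantage lost" from the post above.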
    Quote Originally Posted by flippin_waffles on Intel's 32nm process and new process nodes
    1 or 2 percent of total volume like intel likes to do. And with the trouble intel seems to be having with they're attempt, it [32nm] doesn't look like a very mature process.
    AMD has always been quicker to a mature process and crossover point, so by the time intel gets their issues and volume sorted out, AMD won't be very far behind at all.

  14. #264
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by flippin_waffles View Post
    Another thing i'd like to get off my mind, is why the heck do they design such fancy coolers, and then make the card so you either have to stand on your head to see it once they are in your computer, or hang your box from the cieling. Why can't cards be designed so the bloody cooler is on top? Surely it can't be that difficult to flip everything around, and there's plenty of PCI-E slots to accomodate such a thing if there's no room between the top slot and CPU cooler!!
    The PCI standard. On ISA cards the components were on top of the card.

    Why do you bother looking at the card in the first place?

    Quote Originally Posted by Wesker View Post
    Again I bring up RV770. The only known feature about that chip (just before and during launch) was its 2.5x increase in ALU count. Impressive in its own right, however it brought more than a 2.5x performance increase over R600 and RV670.

    Also I was comparing the jump from R700 (2xRV770) not 2xRV790. In this light (2xRV790), Cypress looks even more unimpressive, with an average 1.6x increase over RV790 (they share the same core clocks, so it's quite fair to compare the two architectures).

    Somehow, a 40% theoretical performance advantage was lost during the 100% increase in specifications count.

    As I said in my original drivers, it could just be the beta drivers not showing Cypress' fullest potential. But given that R600, RV770 and Cypress share the same underlying base architecture, I don't think there's too much to be squeezed out of new drivers. Still, I could be surprised...
    There are bottlenecks in the system, for example the CPU and PCI-E bandwidth. PCI-E 2.0 x16 is showing its limits when raising the PCI-E frequency yields gains in FPS.

    Basically, when the theoretical performance doubles (as it has with RV790 -> RV870), the practical performance is determined by the theoretical max minus the bottlenecks (CPU, PCI-E, drivers to some extent). I'm quite sure that a faster CPU, more PCI-E bandwidth and 1-3 releases more mature drivers will yield notable improvements for RV870. Until then, it comes down to sticking with what we have got now and wishing for the best. As far as I know, there have been some issues with load balancing between SPs, meaning not all the potential performance can be extracted from the cards, due to the nature of the SP clusters.

    Hopefully the prices of the cards will go down for christmas, even though I won't be getting one anyway.
    Last edited by Calmatory; 09-23-2009 at 08:51 AM.

  15. #265
    Registered User
    Join Date
    Apr 2006
    Posts
    64
    "The fact of the matter is that most poor performance scenarios for today’s GPUs are the fault of poorly coded games rather than a lack of processing horsepower." - HardewareCanucks

    I've been saying the same crap for years now. It's like smashing an ant with a sledgehammer. You know, don't fix the code, just add another card via sli and you'll be happy. Oh well, the 5800 series looks solid for the price and the power consumption is fantastic at those performance levels, but I really don't see a reason to upgrade if you have a last generation card and game at 1900x1200 or lower. For those of you that do, it's your money and I'm sure you'll enjoy the new toy...
    TANDY PC
    Intel 486 SX 25
    4 MB RAM
    Trident 512K SVGA
    120 MB Seagate HD
    14 Inch CTX CRT Monitor
    14.4K Modem (too slow for :banana::banana::banana::banana:)
    Radio Shack 2.0 Speakers (6V Battery Operated)
    OS: MS DOS 6.2
    Games: X-wing, Wing Commander, Veil of Darkness, Kings Quest, Zork

  16. #266
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by saaya View Post
    just a suggestion... would def be interesting to see if some cards scale more with faster cpus than others as well for example... so maybe some cards dont need a fast cpu and are a better upgrade for an old system than others... get what i mean?

    Yes. I intend to do this with the i7 920 I have. Everything from stock to 4.3Ghz. However, I intend to wait until the HD 5850 and HD 5870X2 launch.

  17. #267
    Xtreme Addict
    Join Date
    Apr 2007
    Posts
    2,128
    Quote Originally Posted by omega1alpha View Post
    "The fact of the matter is that most poor performance scenarios for today’s GPUs are the fault of poorly coded games rather than a lack of processing horsepower." - HardewareCanucks

    I've been saying the same crap for years now. It's like smashing an ant with a sledgehammer. You know, don't fix the code, just add another card via sli and you'll be happy. Oh well, the 5800 series looks solid for the price and the power consumption is fantastic at those performance levels, but I really don't see a reason to upgrade if you have a last generation card and game at 1900x1200 or lower. For those of you that do, it's your money and I'm sure you'll enjoy the new toy...
    It's nice to see people whining at programmers. The guys work with limited resources (financial and time), usually over 60 hour weeks (esp. during crunch time), doing what they can. Possibly one of the hardest professions to master, and poorly paid. And yeah, "code better" is what they get, and seemingly deserve, from the people they do the work for.

    Though, I guess it's quite broadly agreed that the companies' decisions are to blame, not really the people doing the hard work. Bad happens; poor field to work in.

    Edit: Oh meh, just debunked it all.
    Last edited by Calmatory; 09-23-2009 at 09:03 AM.

  18. #268
    Xtreme Addict
    Join Date
    Apr 2005
    Posts
    1,087
    Techpowerup's review shows that there isn't much difference in performance between PCI-E x4, x8 and x16.


    All systems sold. Will be back after Sandy Bridge!

  19. #269
    Xtreme X.I.P.
    Join Date
    Nov 2002
    Location
    Shipai
    Posts
    31,147
    Quote Originally Posted by SKYMTL View Post
    Yes. I intend to do this with the i7 920 I have. Everything from stock to 4.3Ghz. However, I intend to wait until the HD 5850 and HD 5870X2 launch.
    cool!
    dont wait for the 5850x2 tho... who knows when itll come, as soon as you can get 2 5850s just run them in CF

    and i think 2.4, 3.2 and 4.0 would probably be enough, i suspect barely any scaling between 2.4 and 3.2... after you run the tests 2.8 might make sense to see where it stops scaling... from what i remember even 2.4 to 2.8 should be barely a difference... but then again, high res might be interesting... and CF and SLI need more cpu power... def interesting!

  20. #270
    Xtreme Enthusiast
    Join Date
    Feb 2005
    Posts
    970
    Quote Originally Posted by ***Deimos*** View Post
    ^^^^
    Yeah what he says.

    Ever since 4870x2 launched on August 12, 2008 - there has been little motivation to upgrade. As appealing as dual 30" 2560x1600 may be, or 8xAA.. its the very definition of excessive. Heck most "chumps" with 22-24" LCD or 50" HDTV are stuck at 1920x1080.. making 25x16 results irrelevant.

    Look at the 5870 and 5870 CF benchmarks:
    - Half the games are CPU limited.. all Dual-GPU/CF converge at some crazy high 150-300fps limit.
    - Another big portion like Fallout3, FarCry2, STALKER, RE5, Batman, and especially HAWX show HD 5870 bandwidth starved and falling far behind 4870x2.
    - Finally, very shader intensive Crysis and others show HD 5870 taking clear lead.

    But whats really the relevance if HD 5870 is 10% or 50% faster than GTX285 at 2560x1600 8xAA. 40 fps vs 30fps looks impressive, but neither is playabe. And even if it was, AND you had such a monitor, would $$$ justify the higher resolution.

    Both put up similar 70-100 fps avg at 19x10 4xAA in virtually every game (Crysis excluded ofcourse).

    Bottom line: regardless of how high HD 5870, or GT300 score, until users upgrade from existing 1080p displays, there's little benefit to upgrading existing GTX285/GTX275/HD4890/HD4870.

    ... until ofcourse next must have game like Half Life 3 or Doom4 requires DX11
    Well, I suppose if reviewers like SKYMTL have the attitude that it's pointless to show hardware without showing its maximum potential, all while ignoring the vast majority of consumers who purchase this card, why don't they show the true potential by running 3 and 6 monitor setups, to really maximize the card's potential? I've said before, the only reason I would even think about upgrading my GPU is for the multimonitor support. I've completely lost interest in seeing how big a number I can get, and the number of people in my category seems to be growing fast. I'd rather see how new hardware would affect ME, not live vicariously through a select few on the internet. Or come up with something new, be innovative. [H] tried, and they failed miserably, but at least they tried. Sticking to the same method that's been used in reviews for the last decade is failing.

    [edit on SKYMTL's other response]

    Good stuff, now throw in some AMD processors, Intel's last generation processors, and a few dual cores and now we're talking.
    Last edited by flippin_waffles; 09-23-2009 at 09:07 AM.

  21. #271
    Xτræmε ÇruñcheΓ
    Join Date
    Jul 2005
    Location
    Molvanîa
    Posts
    2,849
    Quote Originally Posted by ***Deimos*** View Post
    ^^^^
    Yeah what he says.

    Ever since 4870x2 launched on August 12, 2008 - there has been little motivation to upgrade. As appealing as dual 30" 2560x1600 may be, or 8xAA.. its the very definition of excessive. Heck most "chumps" with 22-24" LCD or 50" HDTV are stuck at 1920x1080.. making 25x16 results irrelevant.

    Look at the 5870 and 5870 CF benchmarks:
    - Half the games are CPU limited.. all Dual-GPU/CF converge at some crazy high 150-300fps limit.
    - Another big portion like Fallout3, FarCry2, STALKER, RE5, Batman, and especially HAWX show HD 5870 bandwidth starved and falling far behind 4870x2.
    - Finally, very shader intensive Crysis and others show HD 5870 taking clear lead.

    But whats really the relevance if HD 5870 is 10% or 50% faster than GTX285 at 2560x1600 8xAA. 40 fps vs 30fps looks impressive, but neither is playabe. And even if it was, AND you had such a monitor, would $$$ justify the higher resolution.

    Both put up similar 70-100 fps avg at 19x10 4xAA in virtually every game (Crysis excluded ofcourse).

    Bottom line: regardless of how high HD 5870, or GT300 score, until users upgrade from existing 1080p displays, there's little benefit to upgrading existing GTX285/GTX275/HD4890/HD4870.

    ... until ofcourse next must have game like Half Life 3 or Doom4 requires DX11
    unless games start requiring better graphics cards in the future

    you know
    later
    i7 2700k 4.60ghz -- Z68XP-UD4 F6F -- Ripjaws 2x4gb 1600mhz -- 560 Ti 448 stock!? -- Liquid Cooling Apogee XT -- Claro+ ATH-M50s -- U2711 2560x1440
    Majestouch 87 Blue -- Choc Mini Brown -- Poker Red -- MX11900 -- G9

  22. #272
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by omega1alpha View Post
    "The fact of the matter is that most poor performance scenarios for today’s GPUs are the fault of poorly coded games rather than a lack of processing horsepower." - HardewareCanucks

    I've been saying the same crap for years now. It's like smashing an ant with a sledgehammer. You know, don't fix the code, just add another card via sli and you'll be happy. Oh well, the 5800 series looks solid for the price and the power consumption is fantastic at those performance levels, but I really don't see a reason to upgrade if you have a last generation card and game at 1900x1200 or lower. For those of you that do, it's your money and I'm sure you'll enjoy the new toy...
    Totally agree about the game programming (and driver) thing. A slightly different algorithm, a few different instructions, or a different order can make HUGE differences in performance. You need a BILLION transistors to get that 20% improvement with the 5870. Slightly modified code can get a 500% improvement.

    But, I'm getting worried about recent popularity of .NET and such... how it will affect game programming performance efficiency.

    When you have something like TNT2 (for those of you who remember) under the hood:
    - Can't use extra triangles, that will be too slow; can't use too many textures, not enough memory - I got it, we'll use SPRITES! Explosions or other special effects... we'll just draw some texture and "pretend" that's the explosion. After all, gamers have to use imagination.

    When you have X800:
    - Yay triangles everywhere. Oh but wait, gotta make sure to use registers efficiently. And have limits in shaders. Oh and gotta be careful to limit branches.

    When you have 8800GTX:
    - Shaders shaders everywhere. But, so few games written so late for DX10. Now you have the resources, but lack skilled programmers...

    R800, R900, R1000.
    - google "abient occlusion" or "photon mapping". Copy paste code. Done. Optimizations... nah.. its lunch time... besides if I make it too fast, nobody will buy new hardware

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  23. #273
    Xtreme Addict
    Join Date
    Jan 2004
    Posts
    1,313
    Quote Originally Posted by AndrewZorn View Post
    unless games start requiring better graphics cards in the future

    you know
    later
    True that.
    Future games will certainly require better, faster hardware.

    8800GTX owners were glad they prepared for Crysis.
    But, why buy expensive now and wait 1 year for game?

    In a year you can buy a '8800GT' DX11 card... which will just so happen to coincide with launch of Crysis 2 or Half Life 3 or whatever...

    Unless of course you think Battleforge, an RTS, is the pinnacle of FPS gaming
    O_o

    24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
    1 GB OCZ Gold (='.'=) 240 2-2-2-5
    Giga-byte NF3 (")_(") K8NSC-939
    XFX 6800 16/6 NV5 @420/936, 1.33V

  24. #274
    Xtreme Guru
    Join Date
    Aug 2007
    Posts
    3,562
    Quote Originally Posted by Calmatory View Post
    Though, I guess it's quite broadly agreed that the companies
    decisions are to blame, not really the people doing the hard work. Bad happens, poor field to work on.
    True, but you get into one of those "Chicken / Egg" scenarios. What comes first: poor coding, or publishers pushing too hard, which leads to poor coding? I tend towards blaming the publishers.

    Quote Originally Posted by saaya View Post

    and i think 2.4 3.2 and 4.0 would probably be enough, i suspect barely any scaling between 2.4 and 3.2... after you ran the tests 2.8 might make sense to see where it stops scaling... from what i remember even 2.4 to 2.8 should be barely a diference... but then again, high res might be interesting... and CF and SLI need more cpu power... def interesting!
    For me it is more about seeing where the law of diminishing returns starts taking effect.

    Quote Originally Posted by flippin_waffles View Post
    I've completely lost interest in seeing how big of a number I can get, and the number of people in my category seems to be growing fast. I'd rather see how new hardware would affect ME not live vicariously through a select few on the internet. Or come up with something new, be innovative. [H] tried, but they failed miserably, but at least they tried. Sticking to the same method that's been used in reviews for the last decade is failing.
    People are visual creatures and they love pretty looking charts. Seriously, suggest something and I am all ears since trying to put a positive spin on the difference between 100 and 200 FPS is driving me to distraction...

  25. #275
    Xtreme Mentor
    Join Date
    Feb 2007
    Location
    Oxford, England
    Posts
    3,433
    Quote Originally Posted by clayton View Post
    Techpowerup's review shows that PCI-E x4, x8, x16 doesn't have much difference in performance.
    to be fair.. a pcie x4 (gen2) is equivalent to a pcie x8 (gen1), so even on older mobos with support you're only losing between 1~5% depending on the res and whether it's x8 and not x16
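The gen1/gen2 equivalence follows from per-lane bandwidth doubling: after 8b/10b encoding, PCI-E 1.x carries roughly 250 MB/s per lane and 2.0 roughly 500 MB/s, so x4 gen2 matches x8 gen1:

```python
# Rough per-lane PCI-E bandwidth after 8b/10b encoding overhead:
# gen1 (2.5 GT/s) ~250 MB/s, gen2 (5.0 GT/s) ~500 MB/s, per direction.

PER_LANE_MBPS = {1: 250, 2: 500}  # MB/s per lane per direction

def link_bandwidth(gen, lanes):
    return PER_LANE_MBPS[gen] * lanes

# x4 gen2 and x8 gen1 both come out to ~2000 MB/s each way.
assert link_bandwidth(2, 4) == link_bandwidth(1, 8) == 2000
print(link_bandwidth(2, 16), "MB/s for x16 gen2")
```

Since even ~2000 MB/s leaves these cards only a few percent short of full x16 gen2 in the Techpowerup results, the small losses James quotes are plausible.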
    "Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
    //James
