When the 5850 hits $200 I might have to make the jump; can't wait to see if they make 2GB versions, or what those will be like.
What's the use of putting up a subjective review where the top 3 cards nicely smash into a CPU bottleneck again and again? The reason reviewers push their chips to high clocks is to get some differentiation between the cards' scores. Due to the nature of the majority of today's games, many high-end cards will bottleneck at 1680x1050, and in some cases even 1920x1200, even with an i7 @ 4GHz, let alone a C2Q.
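To make the bottleneck point concrete, here's a minimal sketch (hypothetical numbers, not from any review) of why the top cards bunch together: the observed frame rate is roughly the minimum of what the CPU and the GPU can each deliver per second.

# Toy model: every frame needs CPU work (game logic, draw calls) and GPU work.
# Whichever side is slower caps the observed FPS. All numbers are made up.
def observed_fps(cpu_cap, gpu_cap):
    return min(cpu_cap, gpu_cap)

cpu_cap = 90  # frames/sec a hypothetical stock C2Q can feed the driver
for card, gpu_cap in [("card A", 80), ("card B", 110), ("card C", 140)]:
    print(card, observed_fps(cpu_cap, gpu_cap))
# card A 80, card B 90, card C 90 -- the two faster cards "tie" at the CPU cap,
# which is exactly the flat top-of-chart behaviour described above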
Considering ATI/AMD usually gain quite a bit over the first 3-6 months with driver tweaks, especially for the X2/Crossfire implementation, this looks damn good!
You get 4870 X2 / GTX 295-level performance with less power draw, from a single card, for about what the X2 goes for in most places (and about $100 less than the 295).
I'm a bit skeptical about how a single 5870 will drive 6 displays well (keyword: well), but I guess I'll just have to wait & see.
1. Crysis and Crysis Warhead are the same; it's like calling HL2 Ep1 and Ep2 different games :P
2. What settings? At 1680x1050 and 1920x1200 with 4xAA/16xAF, a 5850 has about the same min FPS as a GTX 285 and a 4890, only a higher avg FPS... but even on the 5850 it's unplayable IMO... and even at 1280x1024, where the 5850 pulls ahead, I wouldn't really call 28 min / 40 avg FPS playable... both AnandTech and Xbit Labs get about the same numbers there...
What you really want for Crysis is a 5870 or GTX 295... ideally GTX 285 SLI or 5870 CF I guess... but it's not worth it; it's the only game that needs this, and it's not like it's such a great game that it gives you weeks of fun time...
No, the games were out before the launch... check it out, I was surprised myself... there are definitely 2 cases or so where a game was really slow, then 8.8 gave a big boost and probably fixed something, and after that there were barely any improvements; but a couple of games saw 10% boosts from 8.7 up to, I think, 8.12, which is what they tested?
Yeah, OK... those days are definitely over...
flippin_waffles: TPU tested with an i7 at 3.8GHz, AT with a 920 @ 3.33GHz, Xbit with a 965 @ 3.2GHz, and HWCanucks with a 920 @ 4GHz IIRC... so AnandTech and Xbit Labs should be fine for you... there was a link to some Dutch site that tested on an AMD Phenom II as well, and there was almost no difference between the 285, 4890, 295 and 5870 in many games because they were CPU-bottlenecked... not sure what clocks you need on an i7 to see differences between the cards, but from AT and Xbit we can see that at 3.2GHz there's already a notable difference...
Yes, there's definitely something wrong... everything above the 5870 seems too close together and doesn't really scale up... the only way to get that is if the CPU is limiting... AT tested with their CPU at 3.33GHz and they see bigger scaling above the 5870... so I don't know what CPUs the other reviews tested with to drag the overall scores down...
Hmmm, that heatsink looks nice design-wise, but what about cooling performance?
Probably better than the restricted-airflow stock design, but it doesn't look great... and especially the blue-green PCB is... yuck...
Who's seriously into games and runs a C2D or C2Q below 3GHz... pff, come on. And i7 isn't really faster in games compared to C2D/C2Q clock for clock...
I know it's a lot of work... but could you maybe do at least one or a few tests with lower CPU clocks? I think that would actually be a really nice thing to see and very helpful... because people think cards A and B are notably different, but they might not be on a stock CPU at 2.5GHz (check the Steam HW survey)
Just a suggestion... it would definitely be interesting to see whether some cards scale more with faster CPUs than others, for example... maybe some cards don't need a fast CPU and are a better upgrade for an old system than others... get what I mean?
You expected something that wasn't in the card. The only known, confirmed change was "double the RV790", so there was no room for efficiency improvements. Using the same architecture and doubling everything does not increase efficiency, despite the "more than twice as complex scheduler as before".
But yeah, if you're disappointed with a nearly 2x performance increase over the last gen, that's your thing after all.
Let's wait for R900 and a new(?) µarch; hopefully there will be changes in efficiency there.
I love the look of that cooler. While I wish they would have made the PCB straight-up black, you can't always get what you want, and that card looks sick.
~ Little Slice of Heaven ~
Lian Li PC-A05NB w/ Gentle Typhoons
Core i7 860 @ 3GHz w/ Thor's Hammer
eVGA P55 SLI
8GB RAM
Gigabyte 7970
Corsair HX850
OCZ Vertex 3 SSD 240GB
~~~~~~~~~~~~
Another thing I'd like to get off my mind: why the heck do they design such fancy coolers, and then make the card so you either have to stand on your head to see it once it's in your computer, or hang your box from the ceiling? Why can't cards be designed so the bloody cooler is on top? Surely it can't be that difficult to flip everything around, and there are plenty of PCI-E slots to accommodate such a thing if there's no room between the top slot and the CPU cooler!
But even at the bottleneck we'd then know what to expect, and whether it's worth it or not. If it's, say, 60fps at 1680x1050 I'd be like whoa... but then realise I'm not running a beastly i7 under the hood and would probably get less than that.
Not having a go; I see why they kinda have to do it, but still, wouldn't it be nice to have a review with more... realistic scores, closer to what most users would actually get?
"Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
//James
^^^^
Yeah what he says.
Ever since the 4870 X2 launched on August 12, 2008, there has been little motivation to upgrade. As appealing as dual-30" 2560x1600 or 8xAA may be... it's the very definition of excessive. Heck, most "chumps" with a 22-24" LCD or 50" HDTV are stuck at 1920x1080... making 2560x1600 results irrelevant.
Look at the 5870 and 5870 CF benchmarks:
- Half the games are CPU-limited... all the dual-GPU/CF results converge at some crazy-high 150-300fps limit.
- Another big portion, like Fallout 3, Far Cry 2, STALKER, RE5, Batman, and especially HAWX, show the HD 5870 bandwidth-starved and falling far behind the 4870 X2.
- Finally, the very shader-intensive Crysis and others show the HD 5870 taking a clear lead.
But what's really the relevance of whether the HD 5870 is 10% or 50% faster than a GTX 285 at 2560x1600 8xAA? 40fps vs 30fps looks impressive, but neither is playable. And even if it were, AND you had such a monitor, would the $$$ justify the higher resolution?
Both put up a similar 70-100 fps average at 1920x1080 4xAA in virtually every game (Crysis excluded, of course).
Bottom line: regardless of how high the HD 5870 or GT300 scores, until users upgrade from their existing 1080p displays, there's little benefit to upgrading from an existing GTX 285/GTX 275/HD 4890/HD 4870.
... until, of course, the next must-have game like Half-Life 3 or Doom 4 requires DX11
24/7: A64 3000+ (\_/) @2.4Ghz, 1.4V
1 GB OCZ Gold (='.'=) 240 2-2-2-5
Giga-byte NF3 (")_(") K8NSC-939
XFX 6800 16/6 NV5 @420/936, 1.33V
Again I bring up RV770. The only known feature of that chip (just before and during launch) was its 2.5x increase in ALU count. Impressive in its own right; yet it brought more than a 2.5x performance increase over R600 and RV670.
Also, I was comparing the jump from R700 (2x RV770), not 2x RV790. In that light (2x RV790), Cypress looks even less impressive, with an average 1.6x increase over RV790 (they share the same core clocks, so it's quite fair to compare the two architectures).
Somehow, 40 percentage points of the theoretical advantage went missing: a 100% increase in specifications became a 60% increase in performance.
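Spelling that arithmetic out (a trivial sketch using the figures above):

theoretical = 2.0   # doubled RV790 specs -> 2x on paper
measured    = 1.6   # average observed gain over RV790 at the same clocks
print(theoretical - measured)    # 0.4 -> the "40%" that went missing
print(measured / theoretical)    # 0.8 -> Cypress delivers ~80% of its paper gain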
As I said in my original post, it could just be the beta drivers not showing Cypress' full potential. But given that R600, RV770 and Cypress share the same underlying base architecture, I don't think there's too much left to be squeezed out of new drivers. Still, I could be surprised...
Blame the PCI standard. On ISA cards the components were on top of the card; PCI flipped the component side.
Why do you bother looking at the card in the first place?
There are bottlenecks in the system, for example the CPU and PCI-E bandwidth. PCI-E 2.0 x16 is showing its limits when raising the PCI-E frequency yields gains in FPS.
Basically, when the theoretical performance doubles (as it has from RV790 to RV870), the practical performance is determined by the theoretical max minus the bottlenecks (CPU, PCI-E, and drivers to some extent). I'm quite sure that a faster CPU, more PCI-E bandwidth and 1-3 more mature driver releases will yield notable improvements for RV870. Until then, it comes down to sticking with what we have now and hoping for the best. As far as I know, there have been some issues with load balancing between the SPs, so not all of the cards' potential performance can be used, due to the nature of the SP clusters.
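As a rough illustration of "theoretical max minus the bottlenecks" (all numbers hypothetical, just to show the shape of the argument):

# Toy frame-time model: CPU and GPU work overlap, but per-frame PCI-E
# transfers and driver overhead add on top. Times in milliseconds, made up.
def fps(gpu_ms, cpu_ms, overhead_ms):
    return 1000.0 / (max(gpu_ms, cpu_ms) + overhead_ms)

print(fps(gpu_ms=16, cpu_ms=10, overhead_ms=2))  # ~55 fps, GPU-bound
print(fps(gpu_ms=8,  cpu_ms=10, overhead_ms=2))  # ~83 fps: double the GPU,
                                                 # only ~1.5x the frame rate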
Hopefully the prices of the cards will come down for Christmas, even though I won't be getting one anyway.
"The fact of the matter is that most poor performance scenarios for today’s GPUs are the fault of poorly coded games rather than a lack of processing horsepower." - HardewareCanucks
I've been saying the same crap for years now. It's like smashing an ant with a sledgehammer: don't fix the code, just add another card via SLI and you'll be happy. Oh well, the 5800 series looks solid for the price, and the power consumption is fantastic at those performance levels, but I really don't see a reason to upgrade if you have a last-generation card and game at 1920x1200 or lower. For those of you who do, it's your money, and I'm sure you'll enjoy the new toy...
TANDY PC
Intel 486 SX 25
4 MB RAM
Trident 512K SVGA
120 MB Seagate HD
14 Inch CTX CRT Monitor
14.4K Modem (too slow for anything)
Radio Shack 2.0 Speakers (6V Battery Operated)
OS: MS DOS 6.2
Games: X-wing, Wing Commander, Veil of Darkness, Kings Quest, Zork
It's nice to see people whining at programmers. Those guys work with limited resources (financial and time), usually over 60-hour weeks (especially during crunch time), doing what they can. It's possibly one of the hardest professions to master, and poorly paid. And yeah, "code better" is what they get, and seemingly deserve, from the people they do the work for.
Though I guess it's quite broadly agreed that the companies' decisions are to blame, not really the people doing the hard work. Bad things happen; it's a rough field to work in.
Edit: Oh meh, just debunked it all.
TechPowerUp's review shows that there isn't much performance difference between PCI-E x4, x8 and x16.
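A quick back-of-the-envelope supports that. PCI-E 2.0 carries 500 MB/s per lane per direction, so even x4 leaves a few GB/s, which most games never saturate per frame:

lane_bw_gb = 0.5  # GB/s per lane, per direction, for PCI-E 2.0
for lanes in (4, 8, 16):
    print(f"x{lanes}: {lanes * lane_bw_gb:.1f} GB/s")
# x4: 2.0 GB/s, x8: 4.0 GB/s, x16: 8.0 GB/s -- once a game's per-frame
# traffic fits in the smallest pipe, the extra lanes mostly sit idle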
All systems sold. Will be back after Sandy Bridge!
cool!
Don't wait for the 5850 X2 though... who knows when it'll come; as soon as you can get two 5850s, just run them in CF.
And I think 2.4, 3.2 and 4.0GHz would probably be enough; I suspect barely any scaling between 2.4 and 3.2... after you've run those tests, 2.8 might make sense to pin down where it stops scaling... from what I remember even 2.4 to 2.8 should make barely any difference... but then again, high res might be interesting... and CF and SLI need more CPU power... definitely interesting!
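For what it's worth, the suggested matrix is tiny to script; a hypothetical sketch (run_benchmark is a stand-in, not a real tool):

def run_benchmark(card, cpu_ghz):
    # stand-in: a real harness would launch each game's timedemo here
    print(f"bench {card} with CPU @ {cpu_ghz} GHz")

for cpu_ghz in (2.4, 2.8, 3.2, 4.0):           # the clock points discussed above
    for card in ("GTX 285", "HD 5850", "HD 5870"):
        run_benchmark(card, cpu_ghz)            # record avg/min FPS per game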
Well, I suppose if reviewers like SKYMTL have the attitude that it's pointless to show hardware without showing its maximum potential, all while ignoring the vast majority of consumers who purchase this card, why don't they show its true potential by running 3- and 6-monitor setups, to really maximize the card's potential? I've said before, the only reason I would even think about upgrading my GPU is the multi-monitor support. I've completely lost interest in seeing how big a number I can get, and the number of people in my category seems to be growing fast. I'd rather see how new hardware would affect ME than live vicariously through a select few on the internet. Or come up with something new; be innovative. [H] tried and failed miserably, but at least they tried. Sticking to the same method that's been used in reviews for the last decade is failing.
[edit on SKYMTL's other response]
Good stuff. Now throw in some AMD processors, Intel's last-generation processors, and a few dual-cores, and we're talking.
i7 2700k 4.60ghz -- Z68XP-UD4 F6F -- Ripjaws 2x4gb 1600mhz -- 560 Ti 448 stock!? -- Liquid Cooling Apogee XT -- Claro+ ATH-M50s -- U2711 2560x1440
Majestouch 87 Blue -- Choc Mini Brown -- Poker Red -- MX11900 -- G9
Totally agree about the game programming (and driver) thing. A slightly different algorithm, a few different instructions, or a different ordering can make a HUGE difference in performance. You need a BILLION transistors to get that 20% improvement with the 5870; slightly modified code can get a 500% improvement.
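A toy example of the "slightly different algorithm" point (nothing from any real engine, just an illustration): finding objects within a radius without a square root per object gives identical results with far less math.

import math, random

objects = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(10000)]

def near_slow(px, py, r):
    # naive: one sqrt per object, every frame
    return [o for o in objects if math.sqrt((o[0]-px)**2 + (o[1]-py)**2) < r]

def near_fast(px, py, r):
    # identical result: compare squared distances, no sqrt at all
    r2 = r * r
    return [o for o in objects if (o[0]-px)**2 + (o[1]-py)**2 < r2]

assert near_slow(50.0, 50.0, 10.0) == near_fast(50.0, 50.0, 10.0)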
But I'm getting worried about the recent popularity of .NET and the like... and how it will affect the performance efficiency of game programming.
When you have something like a TNT2 (for those of you who remember) under the hood:
- Can't use extra triangles, that would be too slow; can't use too many textures, not enough memory. I've got it, we'll use SPRITES! Explosions and other special effects... we'll just draw some texture and "pretend" that's the explosion. After all, gamers have to use their imagination.
When you have X800:
- Yay, triangles everywhere! Oh, but wait, gotta make sure to use registers efficiently. And mind the limits in shaders. Oh, and gotta be careful to limit branches.
When you have 8800GTX:
- Shaders, shaders everywhere. But so few games were written for DX10, and so late. Now you have the resources, but lack skilled programmers...
When you have R800, R900, R1000:
- Google "ambient occlusion" or "photon mapping". Copy, paste code. Done. Optimizations... nah... it's lunch time... besides, if I make it too fast, nobody will buy new hardware
True that.
Future games will certainly require better, faster hardware.
8800 GTX owners were glad they had prepared for Crysis.
But why buy expensive hardware now and wait a year for the game?
In a year you'll be able to buy an '8800 GT'-style DX11 card... which will just so happen to coincide with the launch of Crysis 2 or Half-Life 3 or whatever...
Unless, of course, you think BattleForge, an RTS, is the pinnacle of FPS gaming
O_o
True, but then you get into one of those chicken/egg scenarios. Which comes first: poor coding, or publishers pushing too hard, which leads to poor coding? I tend towards blaming the publishers.
For me it is more about seeing where the law of diminishing returns starts taking effect.
People are visual creatures and they love pretty-looking charts. Seriously, suggest something and I am all ears, since trying to put a positive spin on the difference between 100 and 200 FPS is driving me to distraction...
"Cast off your fear. Look forward. Never stand still, retreat and you will age. Hesitate and you will die. SHOUT! My name is…"
//James