Yes, I should have said bit :rofl:
Dude, you can really stop thinking that 512MB of memory is useless for 1920x1200, because I don't play my games at anything but that. I just finished GRID and Mass Effect, both at full res, on a 2900XT and a BE 5000+, both at stock.
I will agree that 512MB will be nowhere near enough within 9-12 months, but until then it's plenty for any good game. Good programmers design games knowing what 90% of the world will be using and aim for that, so I'm sure that many games in the next two years will still run great on my setup and keep the visual quality I've been getting for the last year.
Simple test: someone play Oblivion with Qarl's Texture Pack 3 and tell us if the 4850 chokes badly with the texture pack enabled. If not, then 1GB of memory just isn't as necessary with ATI cards. I thought I saw a test where, with QTP3 on, the 4850 w/ 8xAA actually matched the GTX 280 in Oblivion, but it was a review that was considered controversial.
And the bigger thing is that you CANNOT compare how memory is used across different architectures. The G80 GTX and Ultra seem to suffer less than G92, either because G92 has lower memory bandwidth, less actual memory, or some other bottleneck. However, the 3800s did NOT suffer from increased resolution as much as G92 did, despite being on the same 256-bit bus, because how an architecture actually uses its bandwidth and data differs from architecture to architecture. And the fact that the 4850s, which have *less* bandwidth than the 3870s, seem so far to suffer far less at high resolution, AA, AF, etc. than the 3800s did also tells me the architecture might be able to handle it.
And a 512-bit bus is useless with GDDR5: 256-bit + 3.6GHz GDDR5 = 512-bit + 1.8GHz GDDR3 in peak bandwidth. Given that the memory bus is a very large part of the die and transistor count, I bet Nvidia would rather have had the higher yields of a smaller bus (poor yields cost a lot more than memory chips in both the short and long term) and adopted GDDR5.
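Quick sanity check of that equivalence (a minimal sketch; the data rates are the rumored effective clocks, so treat every number as an assumption):

```python
# Peak theoretical memory bandwidth in GB/s:
# (bus width in bits / 8 bits per byte) * effective data rate in GT/s
def bandwidth_gb_s(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

# 256-bit bus + 3.6 GT/s effective GDDR5 (rumored 4870-style setup)
print(bandwidth_gb_s(256, 3.6))  # 115.2 GB/s
# 512-bit bus + 1.8 GT/s effective GDDR3
print(bandwidth_gb_s(512, 1.8))  # 115.2 GB/s -- identical peak bandwidth
```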
Video memory usage, from what I've seen, varies greatly with the game and the texture settings used, and less so with the resolution and level of AA.
Just open up RivaTuner with the video memory monitor plugin. Crysis doesn't go over 500MB on High from what I've seen; Very High probably goes over 512. Call of Duty doesn't go over 500 ever, even at 1920x1080 with 16xAA. Oblivion with high-res texture mods goes over 600MB, same as Company of Heroes with texture detail set to Ultra.
Most games don't go over 512, but the games that do are very pretty.
Not meaning to threadjack (yet again, sorry), but I am finding the HD4870 an extremely mouth-watering card. I just want to see the UK retail prices and some reviews before I decide what to purchase.
I would go for a 1GB version over a 512MB version because my G92 GTS runs out of VRAM in a few games at high res, and the extra memory would also let me use triple buffering for ultimate smoothness in games that aren't VRAM-hungry :up:
John
I know I personally would prefer a 1GB card. I know 512MB works, but barely at that with some games, without taking a major hit.
One instance was Oblivion back in the day on an 1800XL with 256MB... the game running at 320MB normally or 450MB with AA, with zero slowdowns, yet there was a visual difference.
Guess it's more of a safety-net feature that I'd prefer to have, so I don't have hardware that could do the job but can't because I skimped a little on the card's RAM.
As I just said, memory usage cannot be compared between different architectures. A G92 GTS is very inefficient in memory usage compared to a 3870, for example. Here, if you truly need some more proof:
http://techreport.com/articles.x/14654/10
I have no doubt that 1GB will become necessary about a year from now, but until then 512MB will probably be enough on these cards.
Are these fake/old?
http://img410.imageshack.us/img410/3...2498us4gp0.png
Sorry, I've been out of the game for a few days :confused:
Honestly, is PhysX really that tempting? NV adding PhysX support to their cards is about as useful as adding serial ports and ISA slots to new motherboards. What's the point if no one is utilizing it? With Intel's purchase of Havok, Havok's widespread use (especially compared to PhysX), and AMD also joining the Havok camp on top of that, does PhysX really matter? Last I read, Ageia's PPU did nothing for Havok-based physics, but I could be wrong. :shrug:
Right now PhysX support seems to just be +3Dmark points.
V_rr, that price looks low for a 1GB card from what people have been speculating. That's about on par with the 512MB in price, isn't it?
What are people expecting the 1GB variant to cost? I originally thought the 4870 was only going to come in 1GB, at $350. Now it seems the 4870 may be only $300. I'm guessing that would put the 1GB at $350? /me hopes.
Lawl, a 12W fan... there are floorstanding house blower fans weaker than that...
That chart looks to be a bit fake, Atreides.
People that have used the card say it is very fast, but not so fast that it will beat a GX2 in Crysis.
8800 GTX; my 9800 GX2 would choke on Company of Heroes, but would play Oblivion fine, and even Crysis on Very High most of the time.
I could see that if the hardware were utilized efficiently, with things like culling and unloading textures that aren't in sight, 512MB could be feasible. Hell, it looks like the 4850 isn't having any problems with 512MB of VRAM.
:worship: 256-bit 512MB GDDR5 = no bottleneck, supposedly -> ATI
:worship: 512-bit 1GB GDDR3 = no bottleneck, supposedly -> NV
so claim the 'experts' :hehe:
Yeah right, I'm an expert, and my uncle is Donald Duck.
Hmmm, so we can expect a 9800GTX+++++ ZOMG edition with Havok + Ageia support :ROTF:
The pics in the OP look great! This card is shaping up nicely. I too would prefer a 1GB card; hopefully they will be available near launch.
Now for the independent benchmarks...
I hope it's worth just as much as the 4850 is.
Everyone's going on and on and on about the damn Radeons. They are good as-is at their price point. Period.
And why would you stick a gig of GDDR5 on a 256-bit bus? That's like filling a pool with a garden hose, it seems.
I'm hoping 3-phase regulation is enough for the core.
Grr, I can't wait for the damn reviews already. Was it not supposed to be Monday? Or the 25th?
A pool which in the end is filled either way. It's just down to each side's respective design decision. ATI found it cheaper to stick with a 256-bit bus (a larger bus would just raise R&D as well as chip costs) and opt for GDDR5, while Nvidia decided to go for a wider bus (which is more expensive in both die size and cost) but use the cheaper, more plentiful GDDR3. Both methods achieve the same end goal, but I still think ATI's method is the better one for the consumer, as long as GDDR5 supplies don't run thin, that is. Both designs should offer considerable bandwidth for some time to come.
OK, the Chinese review is done; I am translating it and putting it on the EN site.
http://translate.google.com/translat...sl=zh-CN&tl=en
If you don't want to wait, just follow the link, but it will be really slow for overseas readers.
Awesome, it beats the 260 almost across the board for a smaller price tag!
I wonder if the 1gb version will compare more favourably to the 280? ;)
:eek: great results
Finally, ATI is back in the game!
I've always been green, but I just can't help feeling happy for ATI :p:
I'm having a hard time buying some of these results. With the rig in my sig I get 91fps average in COD4, and a 9800GX2 gets 114, eh?
I hope the HD4870 1GB is a fair bit cheaper (£80) than the GTX 260; it is tempting to CrossFire them. I bet in CrossFire it will beat the GTX 280!
John
Great success! 4870 CF, here I come :) A little less performance than the GTX 280 at almost half the price :up: I'm surprised it even beat the 280 at 1900 res with AA turned up to the max :up::up::up:
Holy smokes.
Me <3 4870
Crank up the AA and enjoy the show.
The (lack of an) 8xAA hit versus 4xAA is pretty amazing for the 4800s so far. Weird to see how the ATI architecture merely matches the GTX 200s' 4xAA hit but beats it soundly once 8xAA is applied.
R700 please
Very solid performance. Nvidia's gonna have to drop some prices.
I wonder how much the 280 will cost in about one month. You can almost get 2 x 4870's for the price of a single GTX 280 ffs. I'm glad ATI finally slapped those lazy asses around.
Holy crap, I don't believe this!!!!
4870 DEFEATS OR TIES GTX 260 ACROSS THE BOARD!!!
:up::ROTF::rofl::eek::shocked:
:party: :party2: :party3:
92°C load on the 4870, W.T.F.
Uber-high wattage usage, too!
Only thing that sucks is the cooling. I'm pretty sure it's caused by the fan speed they use. I think I saw like 30% fan speed at full load on the 4870... temps are normal for that setting, IMO. But if you turn the fan up higher, then you have to wear headphones. Swapping the cooling setup or bearing more noise is OK in my book for the performance/price the card offers.
Anyone know a cooling solution that would work for Crossfire? I don't think 2 x HR-03-GT's would fit.
Yeah, but c'mon, hardly anyone here uses the stock cooling for more than a month ;)
For CF, just do the reverse mount on the upper card.
Temps look just like my 3870s' when I got them, but after flashing the BIOS they came down to about 65°C.
I don't think they used a proper GPU loading tool (fur, wtf?). Their relative wattage results for well-known products (the GX2, for example) are completely different from other sites', e.g. Anandtech.
http://www.anandtech.com/video/showdoc.aspx?i=3334&p=9
In the Anandtech review, there was a large difference between the X2 and the single GPU products, for example. There's almost no difference in the EXPreview test.
No, it's not because of the cooling design. It uses almost the same wattage as the GTX 280. ATI claims only 160W full load vs. 236W for the GTX 280, which is clearly a LIE.
Pfft, this is XtremeSystems... volt mods and water cooling for everyone :P
A full-cover block might not work with the 4870s, but it looks like the mounting holes are the same otherwise...
Why do you say that full cover might not work? :confused:
Only because the RAM is arranged slightly differently, but actually looking at it, the cover might still work anyway (since part of it is angled).
Look at that AA rock and roll!
Your predictions were pretty spot-on... except you said R700 and not RV770! R700 is yet to come...
Now is the time for the smart ones among us to be buying AMD shares. AMD now owns the graphics market for this generation and as such is extremely undervalued.
I read the Chinese version of the review; the card is looking good. If they are selling it for $299, it'll be the best performance/value-balanced card out there right now.
Like this?
http://xoqolatl.ovh.org/files/cram1.jpg
I made the mistake of passing up AMD shares @ 3.50 before (they eventually rose to 40). From the look of things, AMD is going to be in a stronger position this generation than NVIDIA was in the last one. That alone doubles, perhaps triples AMD's current valuation. I missed the gravy train once when I should have bought; I'd feel foolish to do so again.
Wow, good call. According to them, the GTX 280 is only 13% faster than the HD4870 (but almost 2x the price).
Very nice success from AMD. :up:
Competition is good because it will push Nvidia even harder to extend its performance lead.
I would be cautious about buying AMD stock, though; they have huge problems competing against Intel in the CPU market that are not going away. ;)
It's just going to get worse for AMD after Nehalem, IMHO.
What good does it do to be right on a total guess, though? Unless you logically deduced these numbers, you could be right 20 times and it wouldn't make you any more valuable or intelligent about the technology driving performance.
And I'm not sure what you're proving, as you mentioned R700. The only thing I actually see you were right on was GT200's performance over G92/G80.
I meant RV770; it was bad typing (I'm transitioning to Dvorak from QWERTY). Note how there is no RV700.
And I did logically deduce the numbers from rumored specs and the like.
Next prediction: the 4870 X2 will be faster than a GTX 280 GX2, given that the rumors of it being driven at a hardware (as opposed to software) level are correct.
Think an off-chip memory controller or some other form of I/O coordinator which makes CrossFire happen at a hardware level instead of a software level. Again, I predict the actual R700 to be around 70% faster than GT200 should this come true.
In my head... it basically involved checking the rumored shader counts and assuming clock speed increases. Net result: I was wrong on the number of shaders (RV770 side) and clocks (was expecting 10-20% higher clocks, 50% more shaders, and assumed around a 10% boost from the faster memory), but the end numbers were right due to architectural improvements.
Similar on the NV side: I expected 240/128 = 1.875x the shaders with 10% lower clocks, and it's right around 1.7.
All in all, lots of speculation.
The GTX 280 is 13% faster than a 4870 512MB. When we see the 1GB 4870, that percentage will shrink even more at high resolutions.
Can't wait to have one :yepp:
Even if they pulled off that incredibly complex feat with the unified memory (which they didn't), there's no reason that would imply greater performance over regular CrossFire... it just improves compatibility a great deal. Also, you're still going to have things like microstutter.
The whole microstuttering issue will probably be imperceptible, though.
Going to an on-card bridge chip takes A LOT less time than going to the system's northbridge or southbridge. Remember, CF has traditionally been implemented via software, so there is A LOT more overhead.
CF and SLI usually boost performance by, what, 50-70%, right? I'd think that if done right, the boost here would be more like 90% or so.
That's just insane! Almost as fast as the GTX 280 for half the price :eek2:
ATi--> :cord: <--Nvidia
It's almost ironic, seeing as NV is following in the footsteps of 3dfx in its final days. You know, using pretty much the same chip and charging a lot for it just because it was a hit at one time. Just because people flocked to the 8800GTX at $700 doesn't mean they will flock to the GTX 280 with competition like this!
Not to say the company is in any danger of going out of business, though between Nehalem, Larrabee (maybe), and this... it's gonna get rocky.
The boosts that multi-GPU delivers are due to increased processing power.
Do you know the bottlenecks of multi-GPU? I don't think you do, because time/latency, the northbridge, and the southbridge certainly aren't the bottlenecks for SLI or CrossFire.
The cards do batched processing and present their frames interleaved. What hurts scaling is when one rendered frame is somehow made to depend on the previous one, which slows things down dramatically because you have to transfer the contents of one frame to the other card. While this situation would be improved with unified memory, it would not increase performance over any game that performs optimally with multi-GPU today. The best-case scenario is that you get more applications that can be naive about how they code their rendering and still get scaling. You get compatibility and cost savings (you need half the memory that you do today), not absolute speed increases.
Trust me, you're making up results based on bottlenecks that you don't understand.
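To put the scaling argument in concrete terms, here's a toy model of 2-GPU alternate-frame rendering; it's purely illustrative, and the dependency fraction and sync cost are made-up parameters, not measurements of CrossFire or SLI:

```python
# Toy AFR model: independent frames overlap perfectly across two GPUs;
# a frame that reads the previous frame's output forces a serialized
# inter-GPU copy before it can start.
def afr_speedup(dep_fraction, sync_cost):
    # Average time per frame, in units of one GPU's render time:
    # 0.5 with perfect overlap, plus the serialized sync on dependent frames.
    avg_frame_time = 0.5 + dep_fraction * sync_cost
    return 1.0 / avg_frame_time  # speedup vs. one GPU (2.0 = ideal)

print(afr_speedup(0.0, 0.3))  # 2.0   -- no inter-frame dependencies
print(afr_speedup(0.5, 0.3))  # ~1.54 -- dependencies erase much of the gain
```

Under this model, unified memory would shrink the sync cost, but it could never push the speedup past the 2.0 a dependency-free game already gets, which is the point above.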
Although I had several server error messages (I bet your hit count was out of sight!), I finally and slowly got to see most of your review.
Oh, by the way, that was a good review. Let's hope that AMD/ATI decide they need to hang some clouds over the GTX 260 release tomorrow.
I don't know where I'd be without the Google translator.
On the last page at the bottom is this... has anybody said no?
Quote: Do you think the Radeon HD 4800 series will be successful? » Leave your comments!
The 4870 looks very tasty; I might switch out my 8800GTX for one if more numbers follow the same pattern ;)
Uhh, well...
Traditionally, memory latency has been considered negligible.
If you have two GPUs and they share a central off-die memory controller, then that will add some latency, but in theory you could get pretty close to full scaling.
The bottleneck in software-mediated multi-GPU solutions is obvious... the software. Each game has to be coded for in the drivers to enable a multi-GPU configuration, and I don't know if you've noticed, but as of late programmers suck. When you add more and more human interaction into the mix, the product gets less and less efficient, if you haven't noticed.
And I don't know if you realized it or not, but you said "While this situation would be improved with unified memory, it would not increase performance over any game that is optimally performing with multi-gpu today", and as it is, most games are not performing optimally... how many instances are there where a multi-GPU solution is slower than a single-GPU one?
And as for the 90% figure, I already said it was pure speculation.
Anyone noticed the power consumption of the 4870?
6W less than a GTX 280 :shock:
...Nelson time
Good to see ATI on a comeback.
http://img.photobucket.com/albums/v5...lson-muntz.gif
http://xoqolatl.ovh.org/files/cram1.jpg
Is that the exact same sink mounted two different ways? If so, it's... conveniently convenient :|
Didn't know you could mount them like that.
http://translate.google.com/translat...sl=zh-CN&tl=en
That's a margin that really shows a new gen. Grats, AMD.
Ouch for NVIDIA. Well, I hope there is still a lot of room for improvement in GT200 if we're talking about driver optimisation; otherwise Nvidia got gangbanged :)
Wow! It beats the GTX 260 in a lot of games, except Lost Planet, and is roughly 25% faster than the 4850.
Let's hope RivaTuner/ATITool gets updated soon so we can see how a 4850 @ 750MHz core stacks up against the 4870. It'll be interesting to see how much of the 4870's gain comes from memory bandwidth alone rather than the core speed increase (rough split sketched below).
Too bad current drivers don't support PowerPlay yet;
we might see entirely different results :yepp:
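Here's a crude split of that ~25% gap under the rumored clocks (4850 at 625MHz core, 4870 at 750MHz); everything in it is speculation until the overclocked comparison can actually be run:

```python
# Crude attribution of the 4870's ~25% lead over the 4850.
# Inputs are rumored clocks and the review's figure; nothing is measured here.
core_4850, core_4870 = 625, 750        # MHz, rumored core clocks
observed_gain = 1.25                   # ~25% faster per the review

core_only = core_4870 / core_4850      # 1.20x if perf scaled purely with core
leftover = observed_gain / core_only   # ~1.04x left over for bandwidth etc.

print(f"Core clock alone explains at most {core_only:.2f}x")
print(f"Left over for GDDR5 bandwidth and other factors: {leftover:.2f}x")
```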
I just purchased 10,000 AMD shares....CLIIIIIIMB! :D
Next question will be how well this baby can overclock!! I'm still waiting for more reviews, but I already know what my next card will be.
I remember reading this was $299. Still true?
So did Nvidia get to open that can of whoop-ass or what? :down:
No, they didn't. Nvidia is getting owned on pretty much every level at the moment:
Both at $200, a 4850 is considerably faster than a 9800GTX. Maybe the 9800GTX+ can match a 4850, but then it's about $240, which is significantly more expensive. At $240 most users would go with an HIS IceQ3 overclocked 4850 instead of a 9800GTX+, because it'd be faster.
At $300 nVidia has nothing, but ATI has its 4870s, which are again considerably faster than nVidia's $400 GTX 260s. That makes the $240 GTX+ even less significant, because most people would spend $60 more and get a 4870 for a superb leap in performance instead of spending $240 on a G92(b).
At $400 nVidia has the GTX 260, which has been pretty much rendered worthless by the 4870 performance figures above.
At $600 nVidia has the GTX 280, which is slower than nVidia's own EOL'ed $500 9800GX2. Yeah, SLI problems, microstuttering, etc., but most users don't know about that and look only at performance numbers.
Then will come the R700, which will be around $500-600 and will kick the 9800GX2's and GTX 280's asses by a cool margin. And with ATI's rumored new multi-GPU-on-a-single-card technology, it might not even have the previous mGPU solutions' problems.
AAAnd there's the 4850 CrossFire option for $400, which kicks solid ass also.
Actually, I can't justify buying an nVidia product right now, especially in the $200-350 range, which happens to be the deciding range when it comes to sales and profit.
lol... :D :ROTF:
The translation is done, apparently, so here it is in English:
http://en.expreview.com/2008/06/24/f...0-and-hd-4850/