R700 is a single card.
Naff off ceevee, you can't even read. Why are you trying to compare a Pxxx score with an Xxxx score?
Most users are concerned with price, and ATI looks to have the best performance per dollar (or whichever currency you want to choose). They also look to have a very fast product when combining cards, along with low power consumption.
Back on topic, PC Perspective have posted a quick performance preview of the 4850:
http://www.pcper.com/article.php?aid=579
GPU-Z shows 10 Gpixels/s, 20 Gtexels/s and ~64 GB/s of bandwidth.
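Quick back-of-the-envelope check of where those figures come from. The clocks are the rumored HD 4850 numbers, and the 16 ROP / 32 TMU counts are simply whatever values reproduce the readout; none of this is confirmed spec.
Code:

# Rough check of the GPU-Z readout. Clocks and unit counts are the
# rumored/assumed figures from this thread, not confirmed specs.
core_hz = 625e6        # rumored HD 4850 core clock
mem_hz = 993e6         # GDDR3 command clock (DDR: 2 transfers per clock)
bus_bits = 256
rops, tmus = 16, 32    # values that reproduce GPU-Z's numbers

print(f"{core_hz * rops / 1e9:.1f} Gpixels/s")        # ~10.0
print(f"{core_hz * tmus / 1e9:.1f} Gtexels/s")        # ~20.0
print(f"{mem_hz * 2 * bus_bits / 8 / 1e9:.1f} GB/s")  # ~63.6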
Any news on the 4870? Release date? Benches?
If it performs that well in real games, if CrossFire is actually consistent across different applications, and if it's actually released in a timely fashion, I'll be the first to make the switch to AMD. ;)
Until then I remain skeptical. They have a history of high 3DMark scores and TERRIBLE CrossFire performance in many games.
My last CrossFire rig was 2x X1950 XTX, which was great for benching but offered very little performance improvement in most games, plus lots of stuttering and artifacting.
You are right about that terrible history part, but as time moves on, things change. Who knows, we might even have superior NVIDIA chipsets in about a year's time. CrossFire has been the best multi-card solution since they "launched" CrossFire X, or at least that's what the majority has been yelling; I don't have any first-hand experience with either platform, so don't quote me on that one :p:.
About your apples-to-apples comparison... ATi's R700 (probably named 4870X2) is supposed to compete with the GTX 280. But you are right, it will be 2 months late.
Just a note, the 12K X score shows 4 GPUs in GPU-Z :p:
2x 4870X2 cards. And the CPU seems to be a Core 2 of some sort, even though AMD doesn't want to show it.
What's with the GDDR5 at 900MHz, and the subsequent crappy memory bandwidth? I thought the point of GDDR5 was to have a higher clock rate to compensate for the narrow bus.
Will they actually release it with such a clock rate? :confused:
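Then again, the answer depends entirely on what that 900MHz refers to. If it's the GDDR5 command clock, the quad data rate makes the bandwidth anything but crappy; if a GDDR3-era tool applies its usual DDR doubling instead, it reports half the real figure. A sketch using the rumored 900 MHz / 256-bit numbers:
Code:

# Two readings of "GDDR5 at 900 MHz" on a 256-bit bus.
cmd_clock_hz = 900e6
bus_bits = 256

qdr = cmd_clock_hz * 4 * bus_bits / 8  # GDDR5 quad data rate: correct math
ddr = cmd_clock_hz * 2 * bus_bits / 8  # what a GDDR3-era tool would report

print(f"QDR: {qdr / 1e9:.1f} GB/s")  # 115.2 GB/s
print(f"DDR: {ddr / 1e9:.1f} GB/s")  # 57.6 GB/s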
Oh, and a funny reminder for the hype people: what happened to the "minicores", shared memory and other fancy tricks? The 4870X2 is a 3870X2 copy.
Speculation and pipe dreams from fanboys that can't see past the clouds are very different things. I guess you feel hit by the minicore statement...
There were a billion reasons why there would be no minicores and zero reasons for them. Yet some people took the zero...
Yes, but people seem to be comparing it directly now to a SINGLE GTX 280. As in 4 GPUs vs. 1.
I would be happy to compare my rig to yours any day; then we will see who is pitiful. ;)
So many fanboys on this forum who act like they own 30,000 shares in one of these companies.
I own stock in Nvidia and Intel, but I'm not a fanboy of either. If AMD can produce real performance, I will be the first to make the switch, buy their products and give them credit. Back in the P4 days, when AMD had the performance crown, I happily used an Opteron + ATI combination for my PCs.
A 2.5GHz Phenom X4 9850 scores around 8400 on the CPU score. They show ~12300. Phenom gains around 300 points per 100MHz. Do the math.
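Doing that math explicitly, taking those figures at face value (they are forum estimates, not measured data):
Code:

# Claimed numbers: ~8400 CPU score at 2.5 GHz, ~300 points per 100 MHz.
base_score, base_mhz = 8400, 2500
pts_per_100mhz = 300
target = 12300

extra_mhz = (target - base_score) / pts_per_100mhz * 100
print(f"~{(base_mhz + extra_mhz) / 1000:.1f} GHz needed")  # ~3.8 GHz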
Sorry there, AMD shill, forgot it's personal with you. Also, feel free to show me a Phenom doing that CPU score on Extreme.
Just gotta say, ceevee, that Foxconn BlackOps MB looks awesome; that's gonna be some kickass machine :shocked:
As far as I'm concerned, for the performance and price, the 4850 is a godsend for paupers who don't have lots of cash to spend on their machines.
Doesn't matter; lots of hardware with multiple units gets detected repeatedly. I mean, look at a Core 2 Duo or Core 2 Quad in your Device Manager: it lists the processor 2 or 4 times, but you sure as heck don't have 4 CPUs in your rig.
The slide also says bridge chip + next-generation interconnect, and I find it strange that they would list both a brand-new bridge chip and an interconnect when, if it were just CF on a board, they'd just say a new bridge. So at this point, you can't conclude in any direction.
http://www.techpowerup.com/63335/AMD...ies_Today.html
Yes we have just made a decision to change something.
Here’s what we’re planning to do. Effective 3:00 PM EDT today we will be lifting the embargo preventing the publishing of ATI Radeon HD 4850 performance previews. These performance previews may include ATI Radeon HD 4850 benchmarks, photos of the ATI Radeon HD 4850 hardware, information printed on related packaging or photos of the packaging itself, information provided in the ATI Radeon HD 4800 series strategy deck (attached) and related pricing information.
All other ATI Radeon HD 4850 related information provided at the NDA press briefings continues to fall under the embargo until 12:01am EDT on June 25th, as does all information and benchmarks related to the ATI Radeon HD 4870.
Basically what’s happened is our partners have started selling the products and in some cases reviewers that we have and have not sampled have already bought and received the product and published stories. We’re not able to control the situation anymore so we need to give the media the ability to publish info.
Radeon HD 4850 Single & Crossfire Quick Test @VR-Zone
http://sg.vr-zone.com/articles/Radeo...Test/5897.html
HD 4870 for 399 US$? Not according to this leaked slide pic. :rofl:
http://img37.picoodle.com/img/img37/...em_a8d927a.jpg
Who is the wishful thinker and the shill in this case? :ROTF:
There is nothing wrong with AFR; there are, however, some problems with the way it's currently done, and this mainly has to do with the communication between the GPUs and the CPU(s). Better communication between the GPUs can probably also help in solving this problem, but my guess is that AFR is here to stay for a while.
As in the picture above, the 4870 is aimed at $300 (+/- $25 depending on the integrator and components used).
was this posted earlier? tried searching but nothing came up... Amazon adds Diamond Radeon HD 4850 to its website
http://images.tweaktown.com/imageban...0-03a_full.png
cool looking cooler! :up:
One thing I think gets overlooked is that for CF you need:
- two cards
- a CF mboard
- a very quick CPU so as not to bottleneck the cards.
So at the end of the day it may seem cheaper, but for the average user who wants great performance, CF ends up being the same or slightly more than an Nvidia single card.
I won't be getting anything (very happy with my GTX) but just saying.
Yeah, but with the number of Intel chipset users out there, and the fact that the 975X, X38, P45 and X48 all support CF with either x8/x8 or x16/x16 configurations, anyone with a recent motherboard (and the P45 will certainly be mainstream) will have CF capabilities, not to mention the 790 lineup for AMD processors.
Nvidia will only have their nForce chipsets for SLI.
4870 = $299, and about 20% faster than the 4850 at stock. Of course, the real reason for the purchase is the two six-pin connectors and no need for voltmods to reach some nice OC speeds.
The 9800GTX+ vs 4870 should be an interesting battle.
Perkam
Well, if the G92b continues the G92's tendency to suck at higher resolutions and with AA (look at how much the 9800GTX drops with AA in that chart I made, compared to the G80 GTX and GTX 260), then competing against the 4870 will be no contest once AA and the like are on.
Just an FYI... an over-3GHz quad starts to introduce micro-stutter. Having a super-beefy CPU is not really necessary; in fact, it will make your experience worse. Only synthetic benches that actually push CPU requirements need a super-fast CPU. 3DMark Vantage easily illustrates this: CPU speed has little effect on the 3D scoring.
Now, to correct:
- two cards
- a CF mboard
- a decent CPU with a high FSB is required.
No numbers from the 4870 yet? :shrug:
Doesn't that mean a P-P-PPPPPPHENOM is desirable?
D:
Because the CPU goes idle waiting for the VGAs... this "lapse" while the CPU is idle, then loaded, leaves spurts of traffic over the PCIe bus, causing stutter as data from the CPU slows the data flowing from the card out to the monitor.
Luka, just check out the slide pics in this very thread about microstutter...it's shown there too.
This is not the only cause of microstutter, but it's definitely an issue.
Phenom isn't too bad... it could use 3.2-3.4GHz and a slightly faster bus. A Core 2 @ 3.0-3.2GHz with a 400MHz FSB or higher is ideal.
Remember boys, I've got dual 3870X2s... I'm dealing with the issue on a daily basis. I've almost completely eliminated the problem... some apps are more prone to it than others, though.
It's a problem with memory control within the VGA. The VGA gets data from the CPU and must buffer it; with no data incoming, it works only on the data available and will use as much of the ringbus as possible.
Then the CPU sends some data down... it goes on the ringbus too, but now it interferes with rendering, as the dispatch controller must deal with both pixels in flight and pixel data incoming. :shrug:
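To put some numbers on the stutter point, here's a toy illustration (made-up frame times, not a measurement) of why average FPS hides AFR microstutter: badly paced dual-GPU output can report the same average FPS as a smooth single card while feeling far choppier.
Code:

# Toy model of AFR microstutter. Frame times are invented for illustration.
smooth = [25.0] * 8            # ms per frame, evenly paced single GPU
stutter = [10.0, 40.0] * 4     # ms per frame, badly paced AFR pair

def avg_fps(frame_times_ms):
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def effective_fps(frame_times_ms):
    # perceived pace is set by the longest frame-to-frame gap
    return 1000 / max(frame_times_ms)

for name, ft in [("smooth", smooth), ("AFR stutter", stutter)]:
    print(f"{name}: avg {avg_fps(ft):.0f} fps, feels like {effective_fps(ft):.0f} fps")
# smooth:      avg 40 fps, feels like 40 fps
# AFR stutter: avg 40 fps, feels like 25 fps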
http://www.donanimhaber.com/resimler..._dh_fx57_1.jpg
http://www.donanimhaber.com/resimler...x57_test_2.jpg
@ http://forum.donanimhaber.com/m_24110474/tm.htm
512MB of GDDR5???! Shouldn't the GDDR5 version of the 4870 have had 1GB?
Use the upload, Luke!! Use the upload!! :p:
http://img210.imageshack.us/img210/5017/4870cs6.th.png
http://img210.imageshack.us/img210/2...2997ia7.th.png
Pictures courtesy: Donanimhaber
Perkam
Not necessarily; that 1GB 4870 was just a rumor. I'm certain that vendors will make it an option, though.
Yeah, I won't buy another VGA until I can get 1GB per GPU. 2560x1600 demands it!
Well, the memory hasn't had a significant effect on the benchmarks thus far (the 20% increase is from the 20% core clock increase). I'm assuming the memory speed is bottlenecked somehow, but I have 0 experience with GPU memory bottlenecks and what instigates them :p:
Either way, these will fly on LN2, run on air, and make breakfast at stock :) (Yes, these will be hot).
@Cadaveca, there should be 1GB GDDR5 versions out later, and frankly, the GTX 260 not having 1GB does not exempt it from the same criticism. I do, however, find your comment a tad juvenile. The increased speed of the GDDR5 modules does help it process textures faster than GDDR3 at the same memory size. That is why we have memory bandwidth, and it does account for something :)
Perkam
Heh, got a 30-inch monitor, Perkam? Try to run AA and AF at 2560x1600... there isn't enough VRAM to cover all that at 512MB, plus shader data, etc. At lesser resolutions, OK. The DirectX tools make it quite apparent how much VRAM gets used by any app you'd like.
I mean, we could talk about how much memory a single frame of 2560x1600 takes up vs. 1680x1050 (2560x1600 takes up 4x the data of 1280x800)... 512MB is just not enough. It has NOTHING to do with speed, and everything to do with VRAM size! Juvenile or not, it's the truth!
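Rough render-target math behind that argument. The byte counts are idealized (32-bit color, 32-bit depth/stencil, every buffer multisampled, no compression); real drivers allocate differently, so treat this as a sketch rather than an audit.
Code:

# Idealized render-target footprint; ignores textures and shader data.
def render_targets_mib(w, h, msaa=1, buffers=2):
    color = w * h * 4 * msaa * buffers  # front + back color buffers
    depth = w * h * 4 * msaa            # depth/stencil buffer
    return (color + depth) / 2**20

for res in [(1280, 800), (1680, 1050), (2560, 1600)]:
    print(res, f"4xAA: {render_targets_mib(*res, msaa=4):.0f} MiB")

# 2560x1600 has exactly 4x the pixels of 1280x800, and at 4xAA the render
# targets alone pass 180 MiB before a single texture is loaded.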
Got E-peen, Cadaveca? :)
Quote:
Originally Posted by Cadaveca
When the prices for GDDR5 go down a bit, you will see 1GB cards. You can also crossfire two 4850s if you need 1GB that badly, and most people that have $1000 to spend on a monitor usually do go CrossFire or SLI, so again, your concern is a tad unfounded.
Perkam
Seriously, why do people keep comparing and blaming 1GB cards because they get 1 FPS less? It's not about the speed, it's about the size, as cadaveca is saying! :mad:
How does the 4870 look compared to one 4850, or a couple in CrossFire?
There were some tests showing a 3DMark difference of between 26 and 28%, but in the last two hours I have viewed more than 20 reviews, so I couldn't find the link right away :). Found it :) Click
RV770 AIB Info Slides leak :P
http://img210.imageshack.us/img210/3199/aib1sh7.jpg
http://www.ati-forum.de/general/news...-the-partners/
So we need more efficient and wider bus interfaces, so data can flow and not stutter.
Nehalem should bring that along with triple channel, shouldn't it?
But it also brings a problem, if my line of thinking is correct: internal ring buses should also be wider in order to deal with so much data, correct?
I mean, what advantage would there be in having so much bandwidth from the CPU to the GPU if the GPU's ring bus isn't able to deal with so much info? Come to think of it, it surely seems a cause of multi-GPU solutions' failure: small ring buses on each GPU don't allow them to create a fluid data path, do they?
So CF benefits more from a higher FSB?
So, if I understood this correctly, 2 controllers on different GPUs will be able to access the same 32-bit memory module through a 16-bit path each, allowing them to share the same buffered info.
Sweet :yepp:
That slide has nothing to do with our problem; clamshell is for using double the chips, not multiple controllers. Notice there is only one controller in both pictures.
No, one controller can address two modules at once in clamshell. It means that one GPU could potentially write to another's framebuffer. As noted in the pic above, clamshell only allows for doubling the buffer size. You could run into page conflicts when two memory controllers each address the same pieces of GDDR5.
Yes, got my epeen alright.
Um, Perkam, I've got 3870X2s... two of 'em... a 512MB framebuffer for each GPU, 1GB for each card, and it's not enough for 2560x1600.
Unfounded? lol. You look really out of place now, don't you? Remember, I'm the guy buying high-end parts on release... I'm the one that has those concerns.
You, on the other hand, buy entry-level parts, so I understand your perspective, but you should also understand mine rather than trying to undermine it, and you should also face reality a bit here. :yepp:
I'm the ATI fanboi... I'm not knocking the products, merely highlighting my needs as an extreme enthusiast. Surely you can understand that? :stick:
Indeed, they bloody well better have 1GB-framebuffer 4870X2s out at launch. Don't care about the price, don't care about the heat or the power usage. Having to switch to CrossFire because of the wreck that was the 790i is pain enough, but if I'm going to be stuck with 512MB buffers... Not worth thinking about. :shakes:
Ok that pic is too hilarious lol!
Now you may understand why R600 had a 512-bit memory controller. ATI foresaw the problem with so much data in flight, as well as issues in efficiently re-syncing the dispatch processor with the new data.
AMD stepped in and lopped off half of that bus, and stutter is much worse on RV670 than on R600 because of it. There are other issues plaguing RV670 (PowerPlay) though, so anything that exhibited behavior like stutter has been singled out now.
Of course, a faster bus may help with stutter, but more importantly, it will get data to the cards much quicker, and when dealing with 4 GPUs... this is very important. Beating stutter is more about a proper balance of the workload, and with today's hardware, it's easy to get one bottleneck fixed only to have another appear elsewhere!
Cheapest HD 4850 I see now is 150 Euros.
The HD 4870 is on pre-order; the cheapest I saw was 269 Euros :D
I want to clear up some review sites showing awful results for the HD 4850.
Let's take CoD4 as an example.
The following all show the 4850 outperforming the 9800GTX:
computerbase.de - http://www.computerbase.de/artikel/h...call_of_duty_4
guru3d - http://www.guru3d.com/article/amd-at...-powercolor/11
techpowerup - http://www.techpowerup.com/reviews/P...HD_4850/6.html
Then these benchmarks show awful performance:
pcgh - http://www.pcgameshardware.de/aid,64...k-Test/&page=5
pcper - http://www.pcper.com/article.php?aid...e=expert&pid=3 <--- using heavily OC'd Nvidia cards
The PCGH review clearly has something wrong with their test system and should be ignored. There are systematic differences throughout their benchmarks, Bioshock being the other game with terrible performance.
And even in the PCGH review, where they use Qarl's Texture Pack at 1920 x 1200 with 8xFSAA and 16xAF, a single lowly 4850 beats the GTX280:
http://www.pcgameshardware.de/aid,64...icle_id=648091
Somehow the 512MB card is beating the 1GB card even with the texture pack at those settings. Either something is really wrong with the GTX280 or the 4850 has some ridiculous power.
And yeah, the PCPER article is showing the 8800GT OCX (700 core, 1728 shader) and the 9800GTX (the base BFG 9800GTX is at 700 core), so those are all OC'd G92s, and the 4850 still hangs with and even beats them, especially with AA on.
I don't think the PCGH article setup is bad; I mean, it clearly shows that at times even the GTX280 somehow loses to the 4850, as in my Oblivion example.
The PCPER article should've clearly stated that the 8800GT and 9800GTX were OC'd versions to eliminate confusion, but I guess they did a rush job to put the article up. Either way you look at it, though, the 4850 hangs with and even beats OC'd versions of the GTX.
Yeah, that PCPER article was a horrible review; if anything, they should have OC'd the 4850 to a 700MHz core if they were going to compare the card to GT and GTX OC cards. Same thing that was said about the GTX 280: it's a new card, give the drivers some time. Look at what happened with the R600's performance; in those last 4 days it went up by almost 20%.
I think we'll be happy with the results in the end.
One thing is clear in all those reviews: AA scaling is so much better on the 4850 compared to RV670. The performance hit is on par with the NV cards, which is nice.
Nah, they weren't OC'd, at least according to their test system specs; they did use Windows Vista for all tests, though.
Quote:
Test system and configuration
CPU: Intel Core 2 Duo E8500 @ 3600 MHz (400x9)
Board: Asus P5N-D (nForce 750i SLI chipset)
RAM: 4x 1024 MiB DDR2-800 (5-5-5-15)
OS: Windows Vista 64-bit with SP1
Drivers:
• ForceWare 177.34 (HQ)
• Catalyst 8.5, or 8.6 for the HD 4850 (AI default)
VGA:
• GeForce GTX 280, 1024 MiB GDDR3, 602/1296/1107 MHz
• GeForce 9800 GX2, 2x 512 MiB GDDR3, 600/1512/1000 MHz
• GeForce 8800 Ultra, 768 MiB GDDR3, 612/1512/1080 MHz
• GeForce 9800 GTX, 512 MiB GDDR3, 675/1674/1100 MHz
• HD 4850, 512 MiB GDDR3, 625/993 MHz (including CrossFire)
• HD 3870 X2, 2x 512 MiB GDDR3, 825/901 MHz
• HD 3870, 512 MiB GDDR4, 776/1125 MHz
http://www.pcgameshardware.de/aid,64...=648091&page=1
I said the PCPER article, where they used the BFG 8800GT OCX and BFG 9800GTX and didn't state that those two cards were both OC'd.
The PCGH article didn't use OC'd cards. But I'm still confused as to how the 4850 straight up beat the GTX280 while using Qarl's Texture Pack at 1920 x 1200. I thought cards with < 700MB choke on that pack, but the 4850 is eating a 1GB card, at 8xAA no less.
You are completely right... however, I have been watching GDDR5 since Qimonda first released specs (closely enough that I know the pic up there is from a Qimonda document)... I fully understand it's possible (I was saying yes months ago; I had a big part in the shared-framebuffer rumour, back when the ATI guys were saying no), but I question only whether the PCB design will affect overall cost as much as it should, given the intended prices for such cards. I see the same cost as making a 512-bit membus PCB... which gives me a $575 retail price for 4870X2s... I am hoping for less: $499...
Again, I'm the ATI fanboi, but I like to keep it real. If they have pulled it off as I hinted months ago, they've got nVidia under their boot. If they don't have a shared framebuffer, I'll still buy if I get more than 512MB of framebuffer for each GPU.
The PCGH review is the only inconsistent review out there. Yes, they could be the only review with the correct figures, but I'm gonna take the most likely possibility: there was a problem in their test setup. When you get the 4850 barely outperforming the HD 3870, and even falling behind in the higher-res CoD4 benchmark, it should tell you something. Am I a fanboy? I don't really care, but I think the 9800GTX+ will do better than the HD 4850; as the reviews show us, at normal resolutions of 1280x1024 and 1680x1050, the 9800GTX at the moment is pretty much neck and neck.
Yes, their custom timedemo might be different, but why should the relative positions be so significantly different? It would only affect the absolute figures. Look at the PCGH graphs; the 4850 is absolutely awful in some of the games.
The HD 3870X2 also does abnormally well. Maybe something to do with ATI drivers being better optimised in that particular scenario.
Quote:
The PCGH article didn't use OC'd cards. But I'm still confused as to how the 4850 straight up beat the GTX280 while using Qarl's Texture Pack at 1920 x 1200. I thought cards with < 700MB choke on that pack, but the 4850 is eating a 1GB card, at 8xAA no less.
I don't consider the competition. I have rigs from both makers, but I really couldn't care less about what nVidia offers in this respect. Two different companies with different goals lead to very different products, and different apps will be better on either system.
But playing the ATI fanboi, I won't talk nV stuff often.
August seems fair... if the 4870 is really delayed until July, though, the 4870X2 may take longer, as board partners are responsible for the PCB design, no? They've not had the GPUs for that long yet...
No, $499,-
For instance:
http://www.tgdaily.com/content/view/37093/135/
Quote:
When it will become available the 4870 X2 will hit the market for $499.
Wow, guys! These cards are really something! I love the peak power consumption. No need to upgrade my PSU!
Well, when will there be any 4870s for sale?
Well, GDDR5 can use different trace lengths to the memory chips, which I'm sure you know. If the shared-framebuffer rumor is true, I'm guessing the trace lengths between a memory chip and the two RV770s will need to be equal, but even then the board design should probably be less complex and have fewer layers if the memory placement is good.
STOP THE PRESS
TAKE A LOOK AT THIS
http://www.tweaktown.com/articles/14...nce/index.html
So a lot of reviews are pure sh_t -.-
there's a bug here, I suppose
Well, that certainly helps explain things!
Yep, you are thinking as I am... the trace length from one GPU to memory to the next GPU might have to stay the same, or one GPU will always get "priority". I do foresee a way around this, but it's not good for cooling, imho, so who knows. It's a technical challenge that should be shouted from the mountaintops if they pull it off... as they should have patents for it that would prevent nVidia from ever pulling off the same thing, making ATI king of dual GPUs.
But ATI GPUs have been capable of multi-rendering (renderbeast) since R300, so they have far more experience in this field, with products in the marketplace far longer than NV has... so I will put nothing past them.
But let me say this much... R600 was reviewed as having UVD... some reviews even posted results for UVD (which were completely faked)... but no card actually was capable of it. Today those reviews are still posted in that form... which makes me wonder about AMD's rumour control. Everything out in the public domain right now is just rumour, IMHO, and the ones I helped start aren't gonna get much attention from me.
No, I was just pointing out that their info needs to be taken with a grain of salt, as does all other info.
I mean really, the perfect example is the GTX 260/280 release prices... they were supposed to be what? And are actually how much more? It's just speculation in that article, as were the specs they listed, the prices, and everything else. That article is far too old to have any real info other than stuff purposely leaked to find holes in NDAs.
They even hype split clock domains... :rolleyes:
"Our sources"... in other words, the info was not from ATI/AMD.
WTF?! Why word it like that? How about something like: "The CrossFire mode on the P45 seems to have some kind of problem". Oh wait, is it because it would have negative implications for AMD, and that might not be great for his shares? If he wanted to hide a CrossFire problem with the P35/P45 chipset, he should have just left that part out.
Quote:
An intensive game like Crysis sees the cards get hit at all resolutions, and quite significantly at that. The X48 is really able to stand out when compared to the P45.
Hmm, the P45 is PCIe v2.0 2x x8, which is equivalent in bandwidth to 2x x16 PCIe v1.1. It's interesting if these cards actually require the additional bandwidth of a full PCIe v2.0 x16 slot. If this is truly the case, and not merely a driver deficiency, I'm glad I didn't wait for a P45 and stuck with the X48. 4870/X2s in CrossFire will really hate the P45 if so.
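The per-direction bandwidth math behind that equivalence: gen1 runs 2.5 GT/s and gen2 5.0 GT/s per lane, both with 8b/10b encoding (8 payload bits per 10 bits transferred).
Code:

# Peak PCIe bandwidth per direction, by generation and lane count.
def pcie_gbs(gts_per_lane, lanes):
    return gts_per_lane * (8 / 10) * lanes / 8  # GB/s, after 8b/10b overhead

print(f"gen1 x16: {pcie_gbs(2.5, 16):.1f} GB/s")  # 4.0
print(f"gen2 x8:  {pcie_gbs(5.0, 8):.1f} GB/s")   # 4.0
print(f"gen2 x16: {pcie_gbs(5.0, 16):.1f} GB/s")  # 8.0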
@ghost
A single 4850 will still run at full speed in 1 slot, so it won't be bottlenecked.
@cadaveca: I was just in the shower when I had a "Eureka!" moment.
Wait for it....
Wait for it....
Who said that both rv770s need to write to the framebuffer? ;)