Oh wow, ONE DX10 permutation is faster than DX9 in a sea of thousands. :rolleyes:
Personally, I think DX10 was intentionally designed to be inefficient and 'broken' from the beginning. Why? Because Microsoft is evil and they'd rather have people buy Xboxes than PC games.
Think about the extra development time and costs required for DX10. If you make a new cutting edge 3D game for the PC, you'll need to tweak and provide support for both DX9 and DX10.
If you skip DX10, consumers will think that the engine is old and you lose sales and get negative press. If you make it DX10 ONLY, then that cuts off all the XP users and anyone without a DX10 card and you lose sales. The whole "DX10 is for Vista only" thing is a CLEAR indication that Microsoft doesn't care about benefiting the gaming industry as a whole; it's only looking to sell its products, no matter how crappy and inefficient they are.
Sure, you can try to blame the GPU companies for making a bad DX10 GPU implementation but do you really think that would be the case after they've spent hundreds of millions of dollars developing the GPUs?? These aren't monkeys working in lab coats, they're professional ASIC engineers.
That is incredibly irrational and stupid; you would not design the one thing that sells your OS (Vista) to be crippled from the beginning. I think the main problem at the moment is the game developers: they are not accustomed to the new API, which makes for inefficient coding. Which also means it's not really their fault, it's just part of the learning curve (DX9.0 has been around for quite some time). You can also, of course, partially blame GPU drivers, but to what extent.....
4870 X2 scares me...960 Shader Processors 0_o...now if only they can increase onboard x-fire efficiency so that we can see true 2x performance.
However, I do have a problem with ATI laying on more and more shaders instead of making architectural changes (like Nvidia with the GT200), as ultimately, even if they go 45nm (ATI that is), they will start to see diminishing returns on extra shaders very quickly.
Also, as the lower end HD radeons end up in laptops, WTH are they doing giving them 40 shaders each?? That's 10x less than the mainstream cards!!
Apparently the r700 will be an mcm, not just two rv770 dies on the same pcb, so that plus the shared memory bus should greatly improve xfire performance. But I totally agree: unless games start being coded for 960 shaders with very little texture power, more than half the power won't even be touched and we'll have another r600 on our hands
hm, ten months ago there was a rumour that R600 actually has 96 procs, 64 of them working and 32 locked
And here i see they turned them on finally :)
Interesting, but highly unlikely. If that were the case, they would have been unlocked with the rv670, and that would have become a high end product instead of a performance part. The only reason I can see for that not to happen would be if the r600's ringbus were needed for those 480 shaders and so the idea wasn't implemented, but I still find it highly unlikely. Not to mention that even if they were locked, ati would still need to implement them in the die, meaning the transistor count would be way higher. Think about it: g92's transistor count is higher than that of the r600, but it only has 128 shaders.
So going from a 4:1 ALU:TEX ratio to a 3:1 ALU:TEX ratio is bad?
AMD/ATi isn't only increasing the shaders to 96 but also doubling the TMU count. They should also be increasing the z per clock from 2x to 4x.
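For reference, those ratios fall straight out of the rumoured unit counts, assuming the usual 5-wide grouping of stream processors (so 320 SPs = 64 shader units). A quick sketch using the rumoured figures only:
Code:
# ALU:TEX ratio arithmetic, assuming Radeon SPs are grouped 5-wide
# (320 SPs = 64 shader units). All counts here are the rumoured figures
# from this thread, not confirmed specs.

def alu_tex_ratio(stream_processors, tmus, alus_per_unit=5):
    return (stream_processors / alus_per_unit) / tmus

print(alu_tex_ratio(320, 16))  # R600/RV670: 64 units / 16 TMUs = 4.0 (4:1)
print(alu_tex_ratio(480, 32))  # rumoured RV770: 96 units / 32 TMUs = 3.0 (3:1)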
Not true at all... AliG covered it well enough.
This expansion was somewhat expected anyways with the R600 architecture, which was designed to be pretty modular/expandable
Where did you hear that the 4870 will be marketed at the same price point as HD 3870?
I actually don't think it will; the rv770 should perform significantly better and the die cost will be a bit higher. In other words, expect to see it for around $250 (which is considerably higher than the 3870)
that and the 3870 won't be EOL for a while, why else would ati be bringing out a revision a12 die update?
FUD now says the opposite:
http://www.fudzilla.com/index.php?op...6829&Itemid=34
Quote:
Originally Posted by Fudzilla
It's FUD but it makes perfect sense.
They might make the revision for mobile chips but press on with the 4000 series, which if rumors are correct, are already in production and may be in the hands of testers soon ;)
That, and don't forget, fuad doesn't have to be right, his site's called fudzilla for a reason;) I'm certain a12 hasn't been scrapped, unless a12 never existed from the get go. Either way, we're supposed to see a gddr4 3870x2 that's shorter with the a12 die, plus plenty of mobile parts (and my guess is the rv670 will consume quite a lot less power than the rv770). So unless those are scrapped too because the rv770's progress is amazing, we'll see a revision a12 rv670
btw, $299 price for the 4870 means at the very least, it will be competitive with the 9800gtx. ofc, they could always change it last minute or something.
Prices aren't confirmed by a reliable source. I think the flagship RV770XT will be over $300.
I've already explained what the prices really are; price segment divisions, not MSRP.
I think you're wrong on over $300 though. RV770 is going to slide in and replace RV670, and AMD is competing on grounds of price/performance, not raw performance. And I don't think it has performance that makes it good value for $300... Value perhaps, but good, no.
//Andreas
4870 X2 looks good.
I expect the same performance levels as R680XT.
It's clear we're all waiting, but let's keep it reasonable.
Not overestimating the Rv770 is the right thing to do now.....
It will probably do sweet, but back when the HD3870 came out, lots of people said: "the 8800gt rival, and even cheaper....."
In my eyes, that's not how it turned out.... almost the same price, and the performance shows the difference.....
That depends on how well the mcm design will do compared to the regular ol' xfire. Hopefully the performance will be a true 2x single die situation (which, looking at how well core 2 duo and k10 scale multicore wise, may finally be the case) and that will give them the power they need for a high end gpu. If not, then I'll still be happy if it's around $400 USD
An RV770 with 800SP/32TMU specs appeared on chiphell's graphics roadmap. Real or not?
256-bit interface isn't happening on the 4650 (the first midrange products) if one knows anything about ATI midrange history.
Quote:
RV740 240 24 12 256-bit
Posts about FAKE Information removed, was derailing the thread, and people's IQs. ("ZOMG Fake" does not satisfy the reason for making a post)
Perkam
There is a discrepancy between the west and the east regarding the number of SPs. Here in the west, most stories claim 480 shaders, while most in the east say 800.
In my opinion, 480 is the more likely of the two, but I have a hard time believing that would bring any more TMUs. They would likely just add 160 more shaders then, and no TMUs. Although, I should say that I've been told that there will be more TMUs.
With 800 SPs, 32 TMUs seems more likely. But then we have the fact that the core is said to be only 830 million transistors. An extra 480 shader processors and 16 extra TMUs from a mere 170 million additional transistors? Somehow I don't see that happening...
//Andreas
Actually I'm willing to bet the rv740 will have a 256bit memory interface, as ati needs something to compete with the 9600gt. If it does have 256 bit memory, then we'll be seeing a monster midrange card. Think about it: it has the same amount of TMUs as the r600 did but fewer shaders. So in other words it won't be anywhere near as bottlenecked, and if you look at how the r600 performed with roughly 2/3rds of its power being used, we'll see something comparable to the 3870 for the midrange.
I hope that when the real numbers are out, the 4870 will have a really good c/p ratio.
QFT, too much hope usually leads to disappointment.
Quote:
Originally Posted by Helmore
AMD’s Radeon 4800 in production:
http://www.tgdaily.com/content/view/37025/135/
:up:
1GHz core plus gddr5?:eek: sounds great
now it sounds good with 4870.
:up:
I don't know about him being an AMD fan, but he has been known to add to the FUD:
http://www.theinquirer.net/gb/inquir...om-ghz-due-tlb
Perkam
http://www.theinquirer.net/default.aspx?article=41970
Quote:
Originally Posted by Theo Valich
This was just enough for me to think twice before reading his articles. Most of his articles are about AMD.
Metroid.
1.05GHz core clock on an engineering sample is amazing. ES cards are always clocked lower than retail cards. What would the retail speed be? 1100MHz would be crazy.
I doubt we'll see 1GHz core clocks with 55nm; maybe with 45nm, but that will take some time.
I can confirm: under 1Ghz clock for the 4870, probably 900 and 800 for 4850.
I'm definitely going to be upgrading to a 4870 instead of a second 3870.
I'm no fanboy, but it's just because I have a crossfire motherboard, so there's no point buying an Nvidia card if I can't add a second one later.
Also the 4870 should be $300 / £150 I hope, which will make it worth it if it really is going to be as good as expected.
single GPU gives less headaches as well.
True words to live by. I'm afraid to hope for shanghai because of barcelona's poor track record, but that way, if it does perform well, it'll be a nice surprise.
But I have no doubt the 4870 will be a very strong performer and will spank g92b unless that has some more shaders
Very interesting, does this mean ati will start selling their own cards again? I know they didn't with the r600 cards but they did before that
Quote:
Both GDDR3 and GDDR5 memory will be supported by the chip, but ATI itself will only be offering GDDR5 cards.
we should see leaked benchmarks soon. Very soon.
Hope to see real benchies soon. The card looks promising on paper.
780G, which is the latest 55nm part from ATI, goes to 1GHz and more with passive cooling.
Anyway, RV770 will hardly come at 1GHz, in order to get better yields. RV670 also goes to 825MHz and more, but they lowered clocks at launch to get better yields.
Maybe 925MHz for the HD 4870 and 825MHz for the HD 4850.
32 TMUs working at 925MHz will give the very strong texture performance that was lacking in RV670. RV770 looks like a strong performer with a much smaller die size, which will keep prices very low.
People say GT200 is 1.2 billion transistors on 65nm, which is crazy in die size and in price, of course.
1GHz core clock is bull.
Launch is in May.
He says 32TMUs, but no mention of the number of SPs. Might be true, I'm still waiting for confirmation on both SPs and TMUs.
Theo has posted a lot of pro-ATI crap in his days, so I'm not sure what to believe.
EDIT:
Stay away from the benchmarks posted at zol.com.cn, they're fake.
//Andreas
So long as ATI can produce some decent drivers, this card could be very promising. I have high hopes for it indeed.
Anyone who says he can confirm anything about RV770 is just lying. If he has info, he can't say anything - if he says anything, he has no info.
Quote:
Originally Posted by RV770 Info
That's an interesting thought, yet untrue. One reason is that companies often deliberately leak data, the second reason is that leaks that are not deliberate can hardly ever be traced back. The third reason is that in many situations the companies don't really mind the leaking of selective details, in other words it's not worth the effort to trace the leak.
Yup, and i'm sure some folks here work for Nvidia/AMD/Intel etc. :yepp:
and the juicy part ( which explains things that we've seen time & time again ;) )...
usually you get some real info... and 1-2 fake details, so they can track you down if you leak the information as-is ;)
For example:
Contact from Blah-Blah tells Bill "in June we'll release the new motherboard named BlahBlahSter Supreme and it will feature 2 Socket771s, 9 DIMM slots with triple channel support, 4 PCI-E 2.0 16x slots and integrated X-Fi SPU"... let's say that the triple channel support is the intentionally false info.
Bill goes ahead and gives the info to a friend to post it on various places with his nickname... the source spots the false info, and figures out who leaked the info ;)
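Purely as an illustration of that trick (hypothetical names and fake details, not anything a vendor is confirmed to do), the matching logic is about this simple:
Code:
# Hypothetical "canary trap" sketch: everyone gets the same real info plus
# one unique fake detail; whichever fake detail shows up in a leak points
# back to the contact who received it.

briefings = {
    "Bill":    "triple channel support",   # the fake from the example above
    "Alice":   "10 SATA ports",            # made-up fakes for other contacts
    "Charlie": "dual BIOS chips",
}

def find_leaker(leaked_text):
    """Return the contact whose unique fake detail appears in the leak."""
    for contact, fake_detail in briefings.items():
        if fake_detail in leaked_text:
            return contact
    return None

leak = "New board in June: 9 DIMM slots, triple channel support, X-Fi onboard!"
print(find_leaker(leak))  # -> Bill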
Well, all I care about is it working on my PC. With my recent upset with ATI cards I am a bit sceptical. But if it performs well I will definitely get one, or two :p:.
Well this is nice and all but Ati drivers are still inferior to nvidia's. They really need to focus on making their cards more compatible with games.
So the specs were not true? :(
Not a real shader domain, because in that case it would be coreclk x shader domain multiplier. But here both gpus, hd4870 & hd4850, have a 200mhz higher shader clock, and thus it's just coreclk + 200mhz, which can't be compared to nvidia's solution (coreclk x 2.5).
According to that source, it's similar to G70/71 in that the shader clocks have a fixed delta above core clocks.
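To put rough numbers on the difference (using the 850MHz core clock mentioned later in the thread purely as an illustration):
Code:
# Additive shader clock offset (the scheme described above) vs a multiplied
# shader domain like the one the poster compares it to. 850 MHz is the
# rumoured core clock from this thread, not a confirmed spec.

core_mhz = 850
print("core + 200:", core_mhz + 200, "MHz")   # -> 1050 MHz
print("core x 2.5:", core_mhz * 2.5, "MHz")   # -> 2125.0 MHz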
Here is a translation if anyone didn't understand the chart
http://i11.photobucket.com/albums/a1...ndle/RV770.jpg
Makes me wonder if the Pixel Fill rate is around 27.00 MT/s
Steamprocessor. Is that DX11 ?
No, still DX10(.1) - G80/R600 also had streamprocessors (no steam).
Sorry for the late reply, but as people have already pointed out, it isn't worth crossfire'ing 3870's on a P35 due to how slow the second slot is.
I benchmarked my 3870 in both the 16x and 4x PCI-E slots for a comparison, and the results were really shocking; even I didn't expect such a big difference:
http://i118.photobucket.com/albums/o...vv/16xvs4x.jpg
A second card in the 4x slot would really be crippled. I am hoping that Intel / AMD can come up with a cheap chipset option that supports 2x 16x PCI-E lanes, just like Nvidia's 650i and 750i.
If you have an X38 or X48, then crossfire is great. But on a P35 it is a waste of money.
Not sure where he's getting 16 from, but I believe he means 32 TMUs * 850MHz (which should really be 1.05GHz going by his logic) = 27.2 GTexels/s.
I see, thanks :)
Except that if RV770 has 16 ROPs, that gives a pixel fill rate of 13.6 GPixels/s, not around 27 GPixels/s...
Texture Fill Rate = (# of TMUs) x (Core Clock)
Pixel Fill Rate = (# of ROPs) x (Core Clock)
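Plugging the rumoured figures from this thread into those formulas (850MHz core, 32 TMUs, 16 ROPs, none of which is confirmed):
Code:
# Fill-rate arithmetic with the rumoured RV770 figures discussed above
# (850 MHz core, 32 TMUs, 16 ROPs). Unit count x clock in MHz gives
# Mtexels/s or Mpixels/s; divide by 1000 for the usual G-units.

core_clock_mhz = 850   # rumoured, not confirmed
tmus = 32
rops = 16

texture_fill = tmus * core_clock_mhz / 1000   # GTexels/s
pixel_fill   = rops * core_clock_mhz / 1000   # GPixels/s

print("Texture fill rate: %.1f GTexels/s" % texture_fill)   # 27.2
print("Pixel fill rate:   %.1f GPixels/s" % pixel_fill)     # 13.6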
I highly doubt that could possibly happen; that would mean the alu:tex ratio would go even lower, which would never happen after the r600's failure to take the crown. If anything, it would go to 24 TMUs if they didn't change anything besides upping the shader count 50%
But history shows that ATI has increased the ALU:TMU ratio with every new generation.
Thanks. Chiphell's rumours say that ATI will keep the TMU:ROP ratio at 1:1, i.e. 32 TMUs/32 ROPs.
And obviously they've learned that doing so doesn't work, from how badly the 8800gt spanked their offerings when the 8800gt had roughly 1/3 the alu power but something like 4 times the texture power. In order to make it competitive, they had to reduce the ratio; I don't think anyone would have bought the product if they didn't, as gt200 would kill it without breaking a sweat (with af enabled). I personally want to see either a 1:2 ratio or, if possible, a 1:1 ratio again; that's proven to give the best results and it will continue to unless games are coded differently (which won't happen any time soon).
As for the rop power hampering performance, I doubt it, as aa is handled by the shaders with the r600 design, so it was actually the af that was killing performance, not aa. But I would like to see a 1:1 rop:tmu ratio; as I just said, 1:1 ratios have proven to work with today's games and that's what matters for us. Servers can make more use of the shaders, so the r600 made a great firegl, but it was still a decent gamer
Besides, just to point something out, g92 has 16 ROPs but look at how it handles aa
Therefore I think that the Eastern rumours sound more reliable, because ATI is fighting for the $200-300 market, and if the GT200-GT is 2x faster than the G92GT, AMD won't be able to catch up with NV.
Perhaps, but it's not meant to; the r700 will go against gt200. Not to mention that, because of its design, the r700 will actually be more powerful than simply 2 xfired rv770s.
That's just a mouthful of lies.... 800SP is just kiddy talk. There are some "shots of the chip" and they talk about 220mm2, so that could keep ATi from the market failure they had last year with the enormously oversized R600 core. In the end it's only 480SPs, more TMUs (32) and only 16 ROPs again, so the only radical performer will be its gang member, the RV770x2 (R700)
btw, in most games the old R600 underclocked @500MHz is pretty monstrous, so if you keep away from crappy Crysis-and-clones stuff you don't really even need that mainstream monster of an RV770.
Not to mention that only the RV670 core will be inside Fusion, and that's going to see the light of day late next year.
i have faith that the producers know exactly what they are doing.
my early attempts at connecting an abacus to an Etch A Sketch were a dismal failure...:brick::p:
No one has presented any actual proof for either of the scenarios. If you go back and read some of my other posts you will see that I'm very doubtful of 800SPs. I'm only pointing out that Eastern sources are very persistent in their claims of 800SPs. They think we are just as stupid for clinging to 480SPs.
And the chart posted here (http://www.hardware-infos.com/news.php?news=2008) looks like it was based on the stuff Theo posted.
//Andreas
I imagine the 800SP talk came from the X2 itself, which will ultimately comprise 960 SPs if the 480 number is true. When you think about it though, the increase in shaders alone is enough to bring a theoretical 50% increase in performance in games (closer to only 20% in reality due to diminishing returns), and that is if they left it at 480/16/16. The doubling of the TMUs only adds to performance and to my belief that ATI doesn't need 32 ROPs at the moment, even with the GT-200, as it can mass produce the 4870s on 16 ROPs with architecture similar to the 3870s, keep its costs low and undercut the GT-200.
Remember that your argument of prior generation sufficiency works both ways. If most people will never need more than a 500mhz core RV670, then they will never need a GT-200 if the RV770 or RV770 X2 ends up cheaper, which in my experience is very likely as, if it ends up on the same architecture as the HD 3800 series, their costs, even for the X2, will be lower than Nvidia's brand new architecture.
With that said, I am a firm believer that people at XS will buy the card with the best performance, irrespective of the price, so there's little doubt that ATI will need a more powerful high end, considering their past record against new Nvidia cards (7800 vs R500, 8800 vs R600).
Perkam
FUD! RV770XT supports 512bit memory:
http://www.fudzilla.com/index.php?op...=6994&Itemid=1
According to this scenario the specs should then look like this: 480/32/32 or 800/32/32.