Oh wow, ONE DX10 permutation is faster than DX9 in a sea of thousands. :rolleyes:
Personally, I think DX10 was intentionally designed to be inefficient and 'broken' from the beginning. Why? Because Microsoft is evil and they'd rather have people buy Xboxes than PC games.
Think about the extra development time and costs required for DX10. If you make a new cutting-edge 3D game for the PC, you'll need to tweak and provide support for both DX9 and DX10.
If you skip DX10, consumers will think the engine is old, and you lose sales and get negative press. If you make it DX10 ONLY, that cuts off all the XP users and anyone without a DX10 card, and you lose sales. The whole "DX10 is for Vista only" thing is a CLEAR indication that Microsoft doesn't care about benefiting the gaming industry as a whole; it's only looking to sell its own products, no matter how crappy and inefficient they are.
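To make that dual-path cost concrete, here's a minimal sketch (my own illustration, not from any shipped engine) of the device-creation split every DX9+DX10 title has to carry, and this is before you even get to maintaining two sets of shaders and render paths:

    // Try D3D10 first (Vista + DX10-class GPU), fall back to D3D9 (XP, older cards).
    #include <d3d10.h>
    #include <d3d9.h>

    bool InitRenderer(ID3D10Device** dx10, IDirect3D9** dx9)
    {
        // Default adapter, hardware driver type, no creation flags.
        if (SUCCEEDED(D3D10CreateDevice(NULL, D3D10_DRIVER_TYPE_HARDWARE,
                                        NULL, 0, D3D10_SDK_VERSION, dx10)))
            return true;  // take the DX10 render path

        // D3D9 is present on XP and on Vista boxes without DX10 hardware.
        *dx9 = Direct3DCreate9(D3D_SDK_VERSION);
        return *dx9 != NULL;  // take the DX9 render path
    }

(In practice you'd LoadLibrary d3d10.dll instead of linking it directly, so the same exe still starts on XP; that's yet more plumbing a DX9-only game never needed.)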
Sure, you can try to blame the GPU companies for a bad DX10 hardware implementation, but do you really think that would be the case after they've spent hundreds of millions of dollars developing these GPUs? These aren't monkeys in lab coats; they're professional ASIC engineers.
That is incredibly irrational and stupid: you would not design the one thing that sells your OS (Vista) to be crippled from the beginning. I think the main problem at the moment is the game developers: they aren't accustomed to the new API yet, which makes for inefficient code. That also means it's not really their fault; it's just part of the learning curve (DX9 has been around for quite some time). You can of course partially blame GPU drivers too, but to what extent.....
4870 X2 scares me...960 Shader Processors 0_o...now if only they could increase onboard x-fire efficiency so that we can see true 2x performance.
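To spell out why "true 2x" matters, here's the back-of-envelope math (the scaling figure is a hypothetical of mine, not a benchmark):

    single RV770:                    480 SPs -> 1.0x baseline
    X2, perfect scaling:             960 SPs -> 2.0x
    X2 at ~70% CrossFire scaling:    1 + 0.7 -> ~1.7x
    X2 in a game with no CF profile:         -> ~1.0x

The shader count doubles either way; the delivered performance usually doesn't.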
However, I do have a problem with ATI piling on more and more shaders instead of making architectural changes (like Nvidia did with the GT200). Ultimately, even if ATI goes to 45nm, they will start to see diminishing returns on extra shaders very quickly.
Also, since the lower-end HD Radeons end up in laptops, WTH are they doing giving them just 40 shaders each?? That's a tenth of what the mainstream cards get!!
Apparently the R700 will be an MCM, not just two RV770 dies on the same PCB, so that plus the shared memory bus should greatly improve xfire performance. But I totally agree: unless games start being coded for 960 shaders with very little texture power, more than half that power won't even be touched and we'll have another R600 on our hands.
Hm, ten months ago there was a rumour that the R600 actually has 96 shader units, with 64 of them working and 32 locked.
And here I see they've finally turned them on :)
Interesting, but highly unlikely. If that were the case, they would have been unlocked with the RV670, which would have become a high-end product instead of a performance part. The only reason I can see for that not happening is that the R600's ringbus would have been needed for those 480 shaders and so the idea wasn't implemented, but I still find it highly unlikely. Not to mention that even if they were locked, ATI would still need to implement them in the die, meaning the transistor count would be way higher. Think about it: G92's transistor count is higher than the R600's, yet it only has 128 shaders.
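Some rough numbers to back that up (from memory, so treat them as approximate): the R600 is around 700M transistors for its 64 VLIW shader units (320 SPs), while G92 is around 754M for 128 scalar SPs plus its TMUs/ROPs. If another 32 VLIW units (50% more shader hardware) were really sitting dark on the R600 die, its transistor count should have landed well above G92's, not below it.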
So going from a 4:1 ALU:TEX ratio to a 3:1 ALU:TEX ratio is bad?
AMD/ATI isn't only increasing the shader unit count to 96; it's also doubling the TMU count. They should also be increasing the Z rate per clock from 2x to 4x.
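For anyone following along, the arithmetic behind those ratios (unit counts per the rumors above):

    R600/RV670:       64 shader units (320 SPs) : 16 TMUs = 4:1
    RV770 (rumored):  96 shader units (480 SPs) : 32 TMUs = 3:1

So texture throughput is growing faster than shader throughput, which works against the "all shaders, no texturing" worry.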
Not true at all... AliG covered it well enough.
This expansion was somewhat expected anyway with the R600 architecture, which was designed to be pretty modular/expandable.
Where did you hear that the 4870 will be marketed at the same price point as HD 3870?
I actually don't think it will; the RV770 should perform significantly better, and the die cost will be a bit higher. In other words, expect to see it for around $250 (considerably higher than the 3870).
That, and the 3870 won't be EOL for a while; why else would ATI be bringing out a revision A12 die update?
FUD now says the opposite:
http://www.fudzilla.com/index.php?op...6829&Itemid=34
It's FUD but it makes perfect sense.
They might make the revision for mobile chips but press on with the 4000 series, which, if rumors are correct, is already in production and may be in the hands of testers soon ;)
That, and don't forget, Fuad doesn't have to be right; his site's called Fudzilla for a reason ;) I'm certain A12 hasn't been scrapped, unless A12 never existed from the get-go. Either way, we're supposed to see a shorter GDDR4 3870 X2 built on the A12 die, plus plenty of mobile parts (and my guess is the RV670 will consume quite a lot less power than the RV770). So unless those get scrapped too because the RV770's progress is that amazing, we'll see a revision A12 RV670.
BTW, a $299 price for the 4870 means that, at the very least, it will be competitive with the 9800 GTX. Of course, they could always change it at the last minute.
Prices aren't confirmed by a reliable source. I think the flagship RV770 XT will be over $300.
I've already explained what those prices really are: price-segment divisions, not MSRPs.
I think you're wrong about over $300, though. The RV770 is going to slide in and replace the RV670, and AMD is competing on price/performance, not raw performance. And I don't think it has the performance to make it good value at $300... value, perhaps, but good value, no.
//Andreas
4870 X2 looks good.
I expect the same performance levels as the R680XT.
It's clear we're all waiting, but let's stay reasonable.
Not overestimating the RV770 is the right call for now.....
It will probably do well, but in the run-up to the HD 3870, lots of people said: "an 8800 GT rival, and even cheaper....."
In my eyes, that's not how it played out..... almost the same price, and the performance showed the difference.....