^No point in taking random forum posts as gospel.
Then he was equally wrong, because AMD doesn't have to support Havok GPU physics either.
AMD has to support OpenCL, DirectX11 Compute Shaders, and/or whichever GPGPU APIs they want.
Then software developers can build software on top of those APIs/libraries (OpenCL, DX11 CS, CUDA...), and any device compatible with those APIs will be compatible with the software, in the same way that graphics software is built over OpenGL and Direct3D.
AMD has nothing to release for Havok. Havok (the company that develops the library, and which is owned by Intel) is the one that has to release "OpenCL hardware acceleration". And then it will be compatible with every single OpenCL-capable GPU, be it from NVIDIA, AMD or Intel.
ATI's older attempt at GPU physics using Havok was cancelled; OpenCL is the new attempt ;)
Old one:
http://ati.amd.com/technology/crossf...ics/index.html
I don't think so. As Iargon says, there were demonstrations of Havok running over OpenCL, with videos, not long ago. You can even watch a demo video of the OpenCL version of Havok Cloth with some red-dressed dancers on YouTube. And there is an official announcement saying which Havok physics modules are going to be the first to support GPGPU hardware acceleration through OpenCL (Havok Cloth and Havok Destruction, both non-free to license).
Maybe you're talking about the former attempt to accelerate Havok with GPU shaders, which resulted in a first GPU-accelerated physics library called Havok FX and was cancelled in the end...
EDIT: FischOderAal answered before...
With the info date being tomorrow I'm getting excited! Let's hope it's not another 'black' situation (I think) where there was a launch date and only a picture on an AMD web site (at least I think).
I don't know if we'll see any new info tomorrow. Tomorrow's US press presentation is, AFAIK, under NDA, as yesterday's European one was... I hoped for some leaks yesterday, but nothing; I suppose AMD is going all in for the "top secret" route this time... and now I don't see any reason to think tomorrow is going to be any different. Some people are talking about the 16th, others about the 22nd-23rd, as the NDA expiry date...
Damn, I hope we get some leaks before that date, but it doesn't look like that's happening...
Yep, I'm not expecting any hard numbers on technical specs let alone performance from Sep 10th event. Likely it's just the usual PR BS with DX11 demos and some general yapping about the new features.
Aren't they handing out some pieces for the reviewers then?
Ohwell, they're under NDA and it's lifting in like two weeks. :banana: :(
So, quite possibly, 1600 SP, 80 TMU, 32 ROP, 256 bit GDDR5 for Cypress ?
AMD HD 5850/HD 5870: 1600 SPs and 80 TMUs
Just 1 day left to clear up any doubts :)
I like the shader count but the 256 bit memory really annoys me for some reason.
Yeah, this silence is deafening. :cool:
Perhaps if the available quantity is good, there's a chance of a recurrence of the HD 4850's wide prerelease availability in the market?? :shrug:
Yupz, me too, though perhaps ATi's engineers have developed a better data compression technique or are using faster memory chips. RV740 shows that the mArch isn't as bandwidth-starved as predicted; it's more memory-capacity limited at huge resolutions. :yepp:
Any info about the everlasting rumor of MCM packaged X2? Wasn't there a picture of the heatsink which had two thermal pads?
Edit: Also, any info about the "new cooling technique"? The one which uses a vapor chamber as its base. IIRC it was developed with some cooling company.
That's what I expected... at least for the last ATi card launches, the NDA ending date was the same as the official release date, if I remember correctly. I'm not sure about the new cards' NDA ending date, because I've seen different dates here and there (16th, 22nd...) but all of them seem to be in September.
So I think that the cards will be released this month :yepp:
As long as they use faster memory than last time (and if they're going to use only GDDR5 this time, at least that will be the case for the 850 model) they'll be fine, I think. I think memory bandwidth was more than the HD4870 needed, because it had 2x the bandwidth of the HD4850, and if that bandwidth were fully used it would have bottlenecked the 4850's performance much more.
Even if they use the cheaper 3600MHz memory they used last time, it would be double the bandwidth for the 850 model (even with the same memory interface, matching a doubling of processing units), and they could use faster memory for the HD5870 (indeed, I expect they will, to differentiate the two models more).
So if they're using GDDR5, at least the 850 model is going to have double the bandwidth, which seems appropriate if they are really doubling the number of processing units, and the 870 model will probably need a smaller increase than that.
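The doubling claim above can be checked with quick arithmetic. This is a minimal Python sketch; the memory clocks used are typical figures for these cards, assumed here rather than taken from the thread. The key point is that GDDR3 transfers two bits per pin per memory clock while GDDR5 transfers four, so "3600MHz" GDDR5 on the same 256-bit bus carries roughly double the bandwidth of the HD4850's GDDR3.

```python
# Peak bandwidth = bus width (bytes) * effective transfer rate.
# Clock figures below are assumed typical values, not confirmed specs.

def bandwidth_gbs(bus_bits, effective_mtps):
    """Peak bandwidth in GB/s from bus width (bits) and effective MT/s."""
    return bus_bits / 8 * effective_mtps / 1000

# HD4850-style GDDR3: ~993 MHz clock, double data rate -> ~1986 MT/s
gddr3 = bandwidth_gbs(256, 1986)   # ~63.6 GB/s
# "3600MHz" GDDR5: 900 MHz clock, quad data rate -> 3600 MT/s
gddr5 = bandwidth_gbs(256, 3600)   # 115.2 GB/s
print(gddr5 / gddr3)               # ~1.8x, i.e. roughly double
```

So even reusing last year's memory, a 256-bit GDDR5 bus lands close to the "double the 4850" figure the post argues for.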
5870 photographs :
http://gathering.tweakers.net/forum/...ssage/32550799
If that's the pic of a single HD5870, I can't imagine the length of a hypothetical HD5870x2; maybe 12"?
I hope the 5850 is different, because I want to go H2O on it and only use a single slot, not dual. :shrug:
I think for that price you would rather see more RAM, not faster bus speeds. And does anyone even know what speed the GDDR5 used will be? And do we know if ATI tested this with even faster RAM and decided the cost/benefit wasn't worth it?
If they made the chip bigger to get a 512-bit bus and it had worse yields, cost would probably increase much faster than performance, and no one would be happy.
I find it strange to see people complaining about memory performance when we haven't seen a memory bottlenecked card with GDDR5 yet. Maybe we'll see it with 5870 but I don't think so. And I don't understand the reasoning of complaining about die size /price ratio. It should only be performance/price that's to anyone's concern (and maybe performance/performance)...
I'm not sure if I've seen this, but is the 5870 1GB or 2GB?
Ahh, that card's only very slightly bigger, if not the same size, as a 4870, so I expect it to be the same as the 4870x2 atm...
You can check if you get a pic of a 4870 from Google from a similar angle; line them up under each other and you can see =)
No... no bottleneck, hence why the 4890 2GB cards are fine. Honestly, you could up the memory clock 200MHz and see no difference in performance, maybe 1/2%; the performance was all in the core clocks etc.
Nah, it's fine; the vapour card IIRC uses a half-slot vent, and the difference it makes is negligible, so it's all good :D
Put 5.5 Gbps GDDR5 chips on the HD 5870 variant, clock them at 1.25 GHz, and everybody should be a happy camper, especially if there were a variant equipped with 2 GB of VRAM. The HD 5850 can have 4 Gbps chips clocked @ 800-900 MHz.
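For reference, the per-pin figures above map to total bandwidth like this. A back-of-envelope Python sketch; the chip speeds are the post's suggestions, not announced specs, and GDDR5's quad-data-rate signaling means a 1.25 GHz memory clock gives 5.0 Gbps per pin.

```python
# Total bandwidth = pins * per-pin data rate / 8 bits per byte.
# Chip speeds are this post's suggestions, not announced specs.

def bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak bandwidth in GB/s: pins * per-pin rate / 8."""
    return bus_bits * gbps_per_pin / 8

print(bandwidth_gbs(256, 5.5))  # 176.0 GB/s: 5.5 Gbps chips at rated speed
print(bandwidth_gbs(256, 5.0))  # 160.0 GB/s: same chips at a 1.25 GHz clock (QDR)
print(bandwidth_gbs(256, 4.0))  # 128.0 GB/s: 4 Gbps chips, the HD 5850 suggestion
```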
For radial fans (which can generate a lot more pressure than conventional axial fans), the small exhaust will not be a problem as long as the air is well directed towards the exhaust.
OK, I got confirmation of specs for the new cards:
5870 = 1600SP's, clocks at 850/1200
5850 = 1440SP's clocks at 700/1000
Sounds like the 5870 should perform like 2 4890's in crossfire.
http://alienbabeltech.com/main/?p=11135 claims those specs, yeah. It would be a lot faster though due to no crossfire scaling issues if that's the case.
If the 1600 SP, 80 TMU and 20 ROP?? rumour is true, I seriously believe it will be at least 10-20% faster than a 4870x2, and probably even more. This is a single-chip card, so you aren't going to have scaling issues or any of the caveats that come along with a multi-chip solution.
It would be as if the 4870x2 had 100% perfect scaling plus some additional horse power.
If the 4770 is any testament to the overclocking capabilities of the 58**s, we are in for quite a treat, and if I were a betting man I'd bet TSMC's 40nm process has matured quite nicely over the past few months.
This is all based on speculation and personal opinion but that is how I feel.
While there is more muscle in the GPU, if it's still a 256-bit memory bus then bandwidth is pretty much what a single 4870/90 has: half of an X2's. So even if it performs on par with an X2, that's a pretty admirable gain.
One more day for some leaks!
That's not how it works. For all practical purposes, the effective bandwidth of the X2 is the same as if there were only a single card. In the future a NUMA system might come into play where both memory controllers are in communication and then you could (theoretically, assuming the link between memory controllers is fast enough) have 2x the bandwidth in a practical sense.
This is addressing comments about memory bandwidth. The HD4870 has 115GB/sec of GDDR5 memory bandwidth; if you take the 4850's limited 64GB/sec and apply it to the same 800 shaders at the 4870's clock (750MHz, i.e. an overclocked 4850), you only lose about 5% overall performance compared to the stock HD4870. The 5870 with 153+GB/sec GDDR5 gives plenty of bandwidth for 1600 shaders, considering 800 shaders at 64GB/sec GDDR3 nets you 95% of 800 shaders at 115GB/sec GDDR5.

The bottleneck must begin somewhere below ~70GB/sec (GDDR3/5) for 800 shaders in ATI's architecture. This would mean that 1600 shaders only need more than ~140GB/sec of memory bandwidth to operate at their fullest potential.

I also found this graph on TPU where W1zzard was disabling SIMD units in RV770 for his 4830 review, to see how many FPS he could get with 80 to 800 shaders in increments of 80. It seems that as he enables shaders, the FPS increases linearly, which is good news for going from 800 to 1600.
http://tpucdn.com/articles/155/images/perlin2.jpg
http://www.techpowerup.com/articles/other/155
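The threshold argument in the post above can be sketched numerically. This is a Python sketch of the reasoning only; the ~70 GB/s knee and the 153 GB/s figure are the post's estimates and rumors, not measured data.

```python
# If 800 shaders only start losing performance below ~70 GB/s, then a chip
# with double the shaders should need roughly double that bandwidth.
# All figures are this thread's estimates/rumors, not measurements.

shaders_rv770 = 800
knee_gbs = 70.0                            # estimated saturation point for 800 SPs
per_shader = knee_gbs / shaders_rv770      # bandwidth each shader needs at the knee
needed_1600 = per_shader * 1600            # ~140 GB/s for a doubled shader count
rumored_5870 = 153.0                       # the thread's rumored HD 5870 bandwidth
print(needed_1600, rumored_5870 > needed_1600)
```

Under these assumptions the rumored 153+ GB/s clears the estimated requirement, which is the post's conclusion.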
jaredpace: I find that surprising, but how does that explain good performance gains from OC'ing GDDR5 speeds on a 4870, if it's so far above the bottleneck?
Latency.
What good performance gains?
how much of that is due to bandwidth saturation, and how much of that is due to latency? All else being equal, GDDR3 outperforms GDDR5 at equal Hz.
In any case, we'll just need to wait and see. After we have the cards in hand it won't be hard to see if core and memory performance is balanced. Maybe it's imbalanced because AMD is planning on debuting faster models with faster memory. Or maybe we'll finally see some kind of shared memory on X2, and the bus AMD is using between memory controllers would be oversaturated (as well as having less space along the edge of the chip for it to fit on) with a larger memory interface. Maybe they've done some tweaking so they make better use of bandwidth? At the very least the enhancements from RV740's memory controller will be brought over. For all we know RV870 could use tile rendering! One more day.
Well here are two reviews supporting this theory of a ~5% difference at the same clocks:
http://www.ixbt.com/video3/rv770-2-part3-d.shtml#p18
http://www.techenclave.com/reviews-a...ew-115201.html
In the first link they've down-clocked the 4870's core to 4850 speed. In the 2nd they've overclocked a 4850.
If there are small 2-3% performance gains from overclocking a 4870's memory, it is due to decreased latency as mentioned above, not increased throughput. On top of this, I doubt they would release the new "Cypress" architecture with memory modules that create a bandwidth issue impacting the chip's potential performance, especially since the rumored 6 and 7GHz GDDR5 has existed since Feb. '09 and they are only using 5-5.5GHz. That would be kind of hare-brained, so the chip must only be starved for bandwidth when it's being fed at < 150GB/sec. The OC'd 4850 reaching 95% of the RV770's full 800 SP performance @ 64GB/sec supports this theory.
From gurusan (he concludes 7%): http://img127.imageshack.us/img127/7107/36267797qz2.png
Does anybody have any idea how many amps the 5870 would need on the 12V rail???
TIA
Normal being: i7 920 @ 4.2GHz with 6x2GB DDR3, about 5TB of HDDs, 8 x 120mm and 2 x 140mm LED fans, 6x UV lights... alongside the 5870, on a Corsair VX550W???
It's got 41A on the (single) 12V rail.
It runs fine with the 4890 ATM.
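A rough 12 V budget for that setup, in Python. The ~190 W board draw is a guess bounded by the connector spec (two 6-pin PCIe connectors at 75 W each plus 75 W from the slot cap the card at 225 W), and the CPU/system figure is a generous assumption, not a measurement.

```python
# Single 12 V rail capacity in watts.
rail_amps = 41
rail_watts = rail_amps * 12   # 492 W available on the 12 V rail

# Connector spec ceiling: 2 x 75 W (6-pin) + 75 W (slot) = 225 W.
# Assume ~190 W actual card load as a pessimistic guess.
card_watts = 190
cpu_and_rest = 200            # generous guess for an OC'd i7, drives and fans
print(rail_watts - card_watts - cpu_and_rest)  # ~100 W of headroom left
```

So on paper a quality 550 W unit with 41 A on 12 V looks workable for a single card, which matches the replies below.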
Scaling isn't linear with increases in transistor count either... remember the release of the GT200? 240 SPs vs the 9800GX2 with 128x2 SPs, and performance was on par! Only in a few circumstances (games that simply DON'T support SLI) is the GX2 slower. There are even circumstances where scaling is over 100% with two GPU cores (certain games with a 4870x2). I feel there are just too many variables to start proclaiming "XXXX WILL BE THE PERFORMANCE OF XXXX CARD!"
anyone else excited about tomorrow? ;)
9800GX2: 128SP x2, core clock 600 MHz, shader clock 1500 MHz
GTX280: 240SP, core clock 600 MHz, shader clock 1300 MHz
4870X2: 800SP x2, core clock 750 MHz
5870: 1600SP, core clock 850 MHz
5870 should beat 4870X2 without doubt.
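The SP/clock figures above translate to theoretical throughput as follows. A Python sketch: ATI's SPs issue one multiply-add (2 FLOPs) per clock, and all clocks here are the thread's rumors; an X2 also needs good CrossFire scaling to realize its combined number, which a single chip does not.

```python
def gflops(shaders, core_mhz, flops_per_clock=2):
    """Theoretical single-precision GFLOPS: SPs * clock * FLOPs per clock."""
    return shaders * core_mhz * flops_per_clock / 1000

hd5870 = gflops(1600, 850)        # 2720.0 GFLOPS on a single chip
hd4870x2 = 2 * gflops(800, 750)   # 2400.0 GFLOPS across two chips (needs CF scaling)
print(hd5870, hd4870x2, hd5870 > hd4870x2)
```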
I think a nice 550watt corsair would handle a single 5870 just fine, even with overclocking on the cpu, gpu, ram and all your peripherals. I wouldn't count on getting a second card for xfire though.
http://vr-zone.com/articles/suggeste....html?doc=7470
Quote:
PCI Express® based PC is required with one X16 lane graphics slot available on the motherboard
500 Watt or greater power supply with two 75W 6-pin PCI Express® power connectors recommended (600 Watt and four 6-pin Connectors for ATI CrossFireX™ technology in dual mode)
Certified power supplies are recommended
Minimum 1GB of system memory
Installation software requires CD-ROM drive
DVD playback requires DVD drive
Blu-ray™ playback requires Blu-ray drive
For an ATI CrossFireX™ system, a second ATI Radeon™ HD 5870 graphics card, an ATI CrossFireX Ready motherboard and one ATI CrossFireX Bridge Interconnect cable per graphics card (not included) are required
Not so fast. Keep in mind, the 4870 had the SPs and TMUs increased by 2.5x over the 3870, and it isn't even twice as fast the great majority of the time. 4870 to 5870 is only a 2.0x increase.
I have no doubt you could easily run a single 5870 on a 300W PSU. No way it would draw anywhere near that from the wall.
I run this rig:
Intel Core i7 920 @ 3.72ghz HT on | GTX 280 1GB | Bookshelf Spkrs.| Crucial 256GB Indilinx Controller/Samsung NAND SSD| 6GB DDR3 1600 @ 1560 8-8-8 1T | X-Fi XtremeGamer PCI | Asus P6T vanilla X58 | Dell 3007WFP-HC 30" S-IPS LCD @ 2560x1600 | CoolerMaster RC690 Chasis | MX518 Mouse | 6x 120mm fans | + 2 normal HDD's
... 100% rock-solid on my Corsair HX 520w PSU. So, I've little doubt that your 550 VX series would run a 5870 fine unless they have absolute madness for their power requirements. I'll be using this PSU myself for my single 5870, though if I go Crossfire I have a spare 750w high-quality PSU on the shelf that I never put into a rig.
-3870 (and X2) had a higher clock rate than the 4870, so the difference in power was closer to 2.1x than 2.5x.
-4870 was almost always faster than the 3870X2, especially with AA
-There can be other bottlenecks besides the vid card limiting scaling, masking the speed increase.
-Even though they were strengthened in some areas, RV770 shaders were weakened in many ways from RV670 shaders to make them smaller. Overall a good tradeoff.
-There is the occasional freak coincidence where AFR results in over 100% gain, but most of the time doubling resources will yield better results.
-Perfect scaling is very rare in any situation.
well good to see something getting leaked.
I just wish they would hurry up and come out!! Although I'm going to wait until they have something other than stock reference coolers.
Thanks guys :up: now I can get rid of my old stuff and get the 5870, as soon as it's available.
Later, I'll get more of em .. maybe 3 .. even 4, if I can get the classified. And CM will be sending me the UCP 1100W for then :D
22nd of September is the launch date for the cards; expect stock to be easily available in early October.
Everywhere I see info and leaks on the internet, and yet this thread seems dead. Haha
Launch on the 23rd of September. I hoped for earlier, since my girlfriend is leaving for Hong Kong soon... :)
The only computer company immune to the whims of the industry, consumers, recession etc is INTEL.
Whether you are buying a Notebook, HTPC, Netbook, Desktop, Server, Workstation
If your OS choice is Mac, Windows, or Linux.
GRIM REALITY
There's an 80-90% chance it will be an Intel system, 90% of which use an Intel chipset. And at least half of desktops and notebooks ship with integrated Intel graphics (which, FYI, holds MUCH more graphics market share than either AMD or nVidia).
TOUGH CHALLENGES
So the challenges for nVidia and AMD (especially AMD, being first with DX11 and trying to sell Phenoms):
- powerful marketing to turn consumers off integrated graphics. nVidia's "balanced PC" is too little too late.
- five years ago it was easy to point out the lack of video acceleration, shaders and game compatibility... but IGPs are catching up.
- convince existing HD4870/GTX275 owners their shiny almost new DX10 hardware is crapola.
- have enough new features (btw I find DX11 Tessellation and Compute Shaders to be just a rehash of same old...)
- new must have DX11 games... where are they?
AT THE END OF THE TUNNEL... netbooks
Finally, most (non-super-elite lv60 wizzard with a Ferrari in the driveway) consumers have limited budgets. Consumers are going to be tempted to spend money on a CPU/mobo (i.e. Lynnfield, 32nm Westmere etc), a 42" HDTV, an Intel SSD... or, best of all, the exploding thin notebook and/or netbook segment, where there is no market whatsoever for the HD5850 or any add-in graphics card at all. Also recall, only Intel makes a wide range of mobile CPUs and is the de facto choice.
Once again all your bases belong to Intel.
Bottom line... I don't think the HD5850/HD5870 will sell as well as the HD4xx series did... at least as long as the gloom of the recession is around.
How many people actually run multi-monitor? In a workspace, yes, but will it actually have a significant impact on gaming, with people wanting to buy it for this feature? No way.
How many people have two monitors at their main computer? How many have 3 monitors (which is when Eyefinity begins to show its worth)? I can imagine fewer than 10,000, especially among the gaming crowd. Eyefinity is a gimmick for lottery winners who actually have time to game rather than just work.
If it works as well as they say it does, I could see myself going to a 3-monitor setup. Imagine finally playing games while having peripheral vision. It's the difference between viewing the world normally vs viewing it through goggles, or driving a car normally vs driving only seeing through the front windshield. To me the difference could be huge.
That is the problem.
Quote:
Once again all your bases belong to Intel.
Good thing we still have a choice. And I'm grabbing to that choice.
I don't need the fastest CPU possible. I hardly do any encoding. I don't care whether a Photoshop filter takes 5, 7.5 or 10 seconds. I know the performance of a desktop computer can easily be increased by eliminating real bottlenecks, such as replacing the HDD or bumping GPU performance. For me the CPU is a dead end. And the Via Nano is the way to go for netbooks; hell, it's better than Atom anyway...
/offtopic