CM590 looks pretty big:
http://www.coolermaster-usa.com/uplo...9/feature8.jpg
dunno if it's enough...
The only thing with clocks is keeping it under 300W, we both know that :D Seems like with "unlocked OC" they are confident it'll do stock clocks, but it's up to the enthusiasts/OC'ers to push it themselves, so it's not their fault if it goes over 300W.
The price thing I'm kinda pissed at... although I'm sure a miraculous $100-150 price drop will occur when NVIDIA sorts things out :)
It will be an awesome card if the temps and noise are bearable.
I hope 4 HD5970s can be installed on one motherboard in Linux.
Offering 2 TFLOPS of double-precision add performance, and about 1 TFLOPS of DP mul performance.
At this point, don't talk single precision; it doesn't make sense anymore.
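For reference, those figures check out against the stock specs, assuming the commonly cited Cypress double-precision rates (adds at 2/5 of the single-precision rate, muls/FMAs at 1/5). A quick sanity check:
Code:
# Rough peak-FLOPS check for the HD 5970, assuming Cypress DP ratios
# of add = 2/5 and mul/FMA = 1/5 of the single-precision rate.
gpus, shaders, clock_ghz = 2, 1600, 0.725  # stock HD 5970 specs

sp_tflops = gpus * shaders * 2 * clock_ghz / 1000      # 2 FLOPs/clock (MAD)
print(f"SP peak:     {sp_tflops:.2f} TFLOPS")          # ~4.64
print(f"DP add peak: {sp_tflops * 2 / 5:.2f} TFLOPS")  # ~1.86 ("2 TFLOPS")
print(f"DP mul peak: {sp_tflops * 1 / 5:.2f} TFLOPS")  # ~0.93 ("1 TFLOPS")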
zalbard - I do not think that the monitoring software reports the temps correctly. The GPUs don't go above 72°C with 100% fan, according to the sensors, but the metal on the card gets hot (and I mean "leave your skin there if you touch" hot), and the air exhausted is also very, very hot.
jaredpace - yup, good old trusty Volterra, software controllable, but no support from available applications yet.
^ The 4870X2 cooler feels like that, but core temps usually hit 90°C in my experience... What's important besides just the core is VRM temps..
So this card is 3" longer than the HD4870X2? I doubt it will fit my full tower Armor without touching the hard drive. :(
http://www.techtree.com/India/News/D...07523-581.html
...aww, I should have waited for this one instead of the 5870 :(
Oh my god!!! That card is wonderful!!!!!!!
Wow, nice card.
Already got my savings standing by to buy one :)
A post on TPU suggests that the cap on the OC for the dual cards has been increased from the 900MHz of the 5870 and that they are out tomorrow (18/11/09)!
Woot - uncapped overclock limit!
http://i35.tinypic.com/2641gr9.jpg
I hope I've been good enough for Santa to bring me one of these!
how long is this card exactly?
350mm?
343 mm.
Can't imagine how hard it is going to be to get our hands on these, considering the availability of 5850/70s :(
Well, the GPU cores are obviously downclocked for heat reasons, so I am automatically going to get a full-cover water block. The 40nm process had problems with leakage, so I am sure heat will dramatically increase on the GPUs when OC'ing, and the stock cooling will only go so far before things just start to melt or faults and errors occur. I am not sure if I should wait till the process improves so that I get a better OC'ing card with less leakage on the GPUs.
From the techtree article. If this 2x128-bit memory interface information is correct, expect a MASSIVE hit in performance in high-texture DX10 and DX11 games.
Quote:
this graphics card again may use 128-bit memory bus interface.
EDIT: that would also explain some results some editors (myself included) have been getting.
I am also getting weird 2560x1600 results (I suppose that is what you are talking about), and I see that nobody has a working GPU-Z for these babies. However, I have the feeling that the problem is not with the memory bus; from my math we should be looking at a 2x256-bit card. But anyway, we'll see more about that once the reviews are out.
Astennu - I measured ~30.5cm :)
You're actually the 4th editor who has confirmed the issues with me. Honestly, the card is behaving like it can't access its full memory allotment in situations that need a large framebuffer. It IS supposed to be a 2x 256-bit bus, but ironically the ATI documentation does not mention anything about the bus...just clock speeds. That's a contrast to the HD 5800 and HD 5700 series documents, which showed quite clearly their 256-bit and 128-bit specs. :shrug:
You don't really think the card has a 128 bit interface, do you???
Please believe me, I had the EXACT same feeling when I was reading the documentation and noticed that the bus width is not there. Also, if you consider that there is no GPU-Z out there capable of reading the specs correctly, you might get a "fishy" picture of all this. I gave this a lot of thought today, but I strongly think that the bus is 2 x 256-bit, due to the bandwidth they show in the specs. However, the fact that a card whose key selling point is driving three 2560x1600 monitors performs slower at exactly that resolution is at least...ironic :)
Luckily for ATI, the performance in 1920 and 1680 compensates for the behaviour in 2560 in some games, and most of the gamers do not own a 30" monitor yet.
Later edit - unbelievable - GPU-Z 0.3.6 does not work with ATI reference samples, but it does work with retail cards...sigh...
What I am more interested in knowing is what is holding it back, especially versus an HD 5850 Crossfire setup. Another possibility is memory timings, but I am just brainstorming here.
1920 and 1680 resolutions are beside the point here, since performance will be CPU limited even at 4x/8x AA in literally every game. Trust me, with a 4GHz i7, the graphs look pretty darn flat...
The 128 has to be a typo; read the context.
He's comparing to the 5870 specs, and must have written down the wrong thing.
Quote:
this graphics card again may use 128-bit memory bus interface.
What about the fact that it has only 2GB (1GB per GPU) ? Doesn't that have anything to do with it?
My graphs really do not look flat in FC2 or Hawx at 1920 :)
Anyway, the bandwidth might be enough for a 5850, due to the crippled Cypress, but it is not enough for two "full-option" Cypress chips like the HD 5970 has. What changed from the HD 5870? The bus is the same, the number of stream processors is the same; the only different things are the clocks. Which give less bandwidth per GPU compared to the HD 5870.
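For concrete numbers: GDDR5 moves data at 4x the memory clock, so per-GPU bandwidth works out as below (a quick sketch using the stock clocks discussed in this thread):
Code:
# Per-GPU memory bandwidth; GDDR5's effective data rate is 4x the memory clock.
def gddr5_gb_per_s(bus_bits, mem_mhz):
    return bus_bits / 8 * 4 * mem_mhz / 1000

print(gddr5_gb_per_s(256, 1200))  # HD 5870 @ 1200 MHz -> 153.6 GB/s
print(gddr5_gb_per_s(256, 1000))  # HD 5970 @ 1000 MHz -> 128.0 GB/s per GPU,
                                  # the same as an HD 5850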
Hearing how you guys talk about crippled performance at 2560x1600 makes me even more depressed. :(
Yeah, Warhead either I guess.
DX10 is another matter altogether. ;)
The performance issues could be due to loosened timings on the memory. Has anyone been able to read the voltage used for the GDDR5? I haven't got there in my testing, but it is possible that ATI lowered the GDDR5 voltage in order to keep power consumption under 300W. Therefore, they also needed to loosen the timings in order to stay at the desired speed.
Man, I'm grasping for straws here...
Bleh, I really hope ATI gets the shortage of cards sorted out with the release of the 5970. Just looked at retailers and nothing there... well, yeah, maybe for 450€+ (a single 5870) :rofl:
1.10v for the GDDR.
The problem has only one source, one big fat engineering...let's call it compromise: the memory clocks. 1000MHz with a 2 x 256-bit bus simply does not cut it for these beasts. Of course, they clocked the memory down so they can lower the volts, so they can keep the power consumption and heat dissipation low.
The solution is pretty darn simple: 1.15V on the GDDR, 1200MHz on the GDDR, and 2560x1600 is back on track. That's what I did :)
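By the same bandwidth arithmetic as in the sketch above, bumping the memory from 1000 to 1200 MHz lifts each GPU from 128 GB/s to the HD 5870's 153.6 GB/s, which lines up with 2560x1600 coming "back on track".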
Because the 5850's GPU does fewer operations per second compared to a full Cypress. Imagine that the HD 5970's GPUs can process more data than a 5850's GPU, but the bandwidth is not enough to ensure an optimum flow of data. Now, I am not a programmer, but what does that lead us to?
Strange, 1000 MHz GDDR5 should be enough for a 725 MHz Cypress core with 1600 shaders. I think it's a driver issue.
Look at those overclockers that do 1300-1400 on the core with only 1250-1300 MHz memory, and still see a huge boost. At the start I thought the RV870 and RV840 were memory-bandwidth bottlenecked, but if you look at the OC results you gain way more by overclocking the core.
I think and hope this is a driver issue, because I have a 30" screen, and an HD5850 @ 900/1125 is too slow in some games :)
I don't think we are talking about the same thing.
This card is supposedly = 2x HD 5870 cores underclocked to HD 5850 specs.
Bandwidth (on paper) should therefore be ~ the same as two HD 5850 1GB cards.
Therefore this card should = 2x HD 5850 (+/-). However, in higher resolutions in DX10 games, it looks to be getting spanked by two HD 5850 cards.
As for "driver issues", I don't play along with that. In newer games, sure. However, we are talking about HawX DX10 and Far Cry 2 DX10...two games that were released months and months and months ago. If ATI is having "driver issues", that is simply an embarrassment of epic proportions.
Have you tried to overclock the memory? Or the GPU, or bring it to 5870 specs? All you do is whine here lol
SKYMTL - It is not 2 x HD5850; it is 2 x HD5870 with HD 5850 clocks. A small difference between 1440 stream processors and 1600 stream processors, and 72 texture units versus 80 texture units. Even with lower GPU clocks, one GPU from the 5970 is still capable of more computation than a 5850. Anyway, overclock the card to 725/1200. You will have a nice surprise :)
Overclocking and 2560x1600 have nothing in common. This card is a beast in 1680x1050 and 1920x1200 and in benchmarks. The problems start with 2560x1600. Now, when we run 2k3, 2k5, 2k6 or Vantage for ranking, the resolution is either 1024x768 or 1280x1024. No bandwidth issues there :)
Quote:
Originally Posted by Atenuu
I am very pissed.
1: Frame time is roughly; CPU time + RAM time + interconnect time(CPU/NB <-> GPU) + GPU time + VRAM time. Understood?
2: CPU time does not change regardless of the resolution, unless CPU is doing per pixel ops. AI, Input handling, event handling, physics, game state updates, game world updates etc aren't tied to resolution.
3: GPU time is directly proportional to the game's visual quality settings and resolution.
But hey, just overclock the FRAMEBUFFER to see if it helps.
:ROTF:
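A toy model of points 2 and 3 (all numbers invented, with the RAM/VRAM/interconnect terms folded into the two dominant ones) shows why the low-resolution graphs come out flat: once the per-frame GPU time drops below the fixed CPU cost, resolution barely moves the FPS.
Code:
# Toy frame-time model, illustrative only: a fixed per-frame CPU cost
# plus a GPU cost that scales with pixel count.
CPU_MS = 10.0          # invented fixed CPU time per frame
GPU_MS_PER_MPIX = 1.0  # invented GPU time per million pixels

for w, h in [(1280, 1024), (1680, 1050), (1920, 1200), (2560, 1600)]:
    gpu_ms = GPU_MS_PER_MPIX * w * h / 1e6
    print(f"{w}x{h}: {1000 / (CPU_MS + gpu_ms):5.1f} fps")
# -> ~88, ~85, ~81, ~71 fps: nearly flat until 2560x1600,
#    where the GPU term finally starts to matter.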
You and I just said the same thing. ;)
My point was that on paper, the HD 5970 has better specs than 2x HD 5850 yet gets beaten out by the dual card configuration due to what seems like bandwidth issues. I was wondering out loud where those limitations came from considering the clock speeds. :up:
SKYMTL - Ok, I get your point :)
Calmatory - Careful with that "physics" word on this thread =))
Maybe you could determine the size of the bus by the number of memory chips plus some info from their datasheets.
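That should work: GDDR5 chips normally expose a 32-bit interface each, so the chip count per GPU gives the bus width directly. A rough check (the chip counts here are illustrative, not a confirmed PCB layout):
Code:
# Bus width from memory chip count, assuming each GDDR5 chip runs in
# its standard x32 mode (x16 clamshell mode would halve this).
BITS_PER_CHIP = 32
print(8 * BITS_PER_CHIP)  # 8 chips per GPU -> 256-bit (fits the spec-sheet bandwidth)
print(4 * BITS_PER_CHIP)  # 4 chips per GPU -> 128-bit (the techtree claim)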
I don't think it's the PLX chip; the PLX chip basically makes the x16 PCIe slot into two x8 slots.
So that means performance similar to 5850 Crossfire on, say, a 790X/790GX/P55. But if one uses the PLX on an x8 PCIe slot, the x8 speed will go down to x4.
5850 CF on 790X/790GX gives similar performance to 5850 CF on 790FX, so I don't think the PLX is an issue.
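Taking those posts at face value (the PLX bridge splitting the host slot evenly, and PCIe 2.0 at roughly 500 MB/s per lane per direction), the per-GPU link bandwidth looks like this:
Code:
# Per-GPU PCIe bandwidth behind the PLX bridge, assuming PCIe 2.0
# (~500 MB/s per lane per direction) split evenly between the two GPUs.
MB_PER_LANE = 500
for host_lanes in (16, 8):
    per_gpu_lanes = host_lanes // 2
    print(f"x{host_lanes} slot -> x{per_gpu_lanes} per GPU -> "
          f"{per_gpu_lanes * MB_PER_LANE / 1000} GB/s")
# x8 per GPU matches the x8/x8 Crossfire boards, which show no real
# loss versus x16/x16 - hence PLX probably isn't the bottleneck.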
OK, seems I am gonna wait for pro reviewers lol. First it's 128-bit, then it's undervolted, now the PLX isn't enough.
Thought I'd share some Google findings, enjoy :)
http://www.hardwarezone.com.sg/artic...?cid=3&id=3072
Doesn't make sense to me to even consider this card to have a 128 bit memory bus. That would mean that it is using a new variation of silicon (a new die) or the memory bus was clipped in half. Neither is plausible.
The card itself is somewhat disappointing to me based on what I've seen so far. I don't really see a problem with the length though; if you plan on putting a 300 watt card in a smallish case, you are just asking for problems.
300 watt is not a problem, as long as your PSU can deliver (and most do), and the cooler can keep the card cool. But you can always upgrade the case-cooling/airflow, or the GPU-cooler, anyways.
The performance is the most important thing; whether it can match (or at least get close to) 2 x 5870 will decide how well it sells, I believe.
Besides, all those hardware cases must carry a cooling system. Mine has a dual-loop internal watercooling setup with 2 reservoirs in the middle, so I have only 28 cm of space for a video card.
And now I am in doubt what to choose - a 5970 plus a little case mod, or 5870s in Crossfire.
Length may be an issue for many, I think.
Also, knowing this card can overclock to the speeds of the 5870 and get you the performance you're looking for...
ATI only downclocked it to get the TDP it wanted. They show you that the core can go higher, and the cooler is rated to cool 400W compared to the 300W it's producing now.
To be able to see benefits from overclocking this card, you're gonna need a fast, like really fast, CPU.
It is done!
HD 5970 vs GTX 295/GTX285/HD5870/HD5850/HD4890 - single card stock & OC test
HD 5970 vs 2 x HD 5970 vs 1/2/3/4 x HD 5870 - multi-card test
HD 5970 - short air-cooling overclocking study from 725/1000 to 1000/1200MHz
Bonus - STALKER: Call Of Pripyat DX11 benchmark results in 2560x1600
All of these can be found in the test below.
ATI Radeon HD5970 – Ave Hemlock, morituri te salutant @ lab501 (Google Translator Required).
But people here say they have gotten theirs to 900-1000 core clocks..
And looking at the review posted here... it's not the best I've seen..
They use a stock i7 at 3.3, only 3GB of RAM, and report a 1920x1440 res? I don't even know how you do that on a native-res screen... so... I'll see what some of the more enthusiast crowd can do..
If you remember, go look at any 4870X2 review: most reported 780 core clocks... where most people can do 800-820 on stock voltage, some a little higher..
So I'll wait to see from Monstru, for one..
PCI-e slots are plentiful these days, and we've got a few extra anyways.
But the important buying factor would be whether this can deliver 75% of the performance of 2x 5870, especially at max OC for 24/7 use :eeeek:.
Does it deliver the 75% at max OC too, you think?
If that's the case, that's not bad at all. One 5870 seems to cover most needs on a 24" monitor, anyways.
The 5970 or 2 x 5870 is targeted at multi-monitor setups, or at least 30". For me, it is really interesting to see how the max 24/7 OC performance of these two alternatives will turn out on 3x24" multi-monitor setups.
Eyefinity does not work on multi-gpu setups as of now AFAIK. EDIT: at least with crossfire enabled.
Each card ran 6 displays... no Crossfire was enabled.
Yes, X-Plane was the game shown on that (Linux), and it supports the screen spanning/bezel management and all the things that Eyefinity CURRENTLY does not, through the game code, not the drivers. It was actually 4 instances of the game, each one running one "viewport" spanned across 6 monitors.
Oh, those cheaters.
When are these 5970s out to buy?