I doubt that there will be cards with DisplayPort as the norm
Yeah, I feel your sarcasm, but there is no need for it :). The Cypress GPU has double the specs of the RV770/790, so one doesn't have to be a design team member to roughly guess what its performance will be. There is a question whether the GDDR5 used on it will be a bottleneck or not, but the 5870 IS going to outperform the 4870X2, or match it in some worst case. Also there is some talk that the 5870 trails a bit behind or roughly matches the GTX295, which is roughly (surprise surprise) 25-30% faster than the 4870X2 :p:
In case you didn't see this. It looks fresh from the oven:
http://alienbabeltech.com/main/?p=11135
Some Highlights
According to our own sources, the new HD 5870 offers over 1600 Stream processors. Amazingly, AMD doubled the number of SIMD units from 10 to 20. Each SIMD unit still contains 16 5D units and a quad-TMU. Overall: 1600 stream processors and 80 TMUs. We are talking about a video card whose core runs at 850 MHz and whose 256-bit GDDR5 runs at 1200 MHz – all for a suggested retail price of $399! AMD is expecting the HD 5870 to come close to the performance of an HD 4870 X2 or GTX 295.
There is also a new feature, Eyefinity, which means 3 LCDs can be simultaneously supported at 2560×1600, with options for future cards to support six LCDs!!
The die size is 330 mm² and it is packed with over 2.1 billion transistors. This translates into one beast of a card with just over 150 GB/sec of memory bandwidth. It is very likely that the 8X + 8X PCIe CrossFire slots of the new P55 motherboards for Core i5 will become saturated.
What is outstanding is that we are hearing the HD 5870 will draw just over 26 watts at idle and peak below 190 watts maximum!! That is quite a challenge for Nvidia to meet or beat with their own upcoming GT 300 series.
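The "just over 150 GB/sec" figure lines up with the quoted memory specs. A quick back-of-the-envelope check (a minimal sketch; the 1200 MHz GDDR5 clock and 256-bit bus come from the article above, and the 4x-per-clock data rate is the standard GDDR5 assumption):

```python
# Rough GDDR5 bandwidth estimate for the rumored HD 5870 memory interface.
# GDDR5 transfers 4 bits per clock per pin (quad-pumped), so the effective
# data rate = memory clock * 4.
mem_clock_mhz = 1200          # rumored GDDR5 clock from the article
bus_width_bits = 256          # rumored memory bus width
effective_rate_gbps = mem_clock_mhz * 4 / 1000   # 4.8 Gbit/s per pin

bandwidth_gb_s = effective_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s:.1f} GB/s")   # -> 153.6 GB/s, i.e. "just over 150 GB/sec"
```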
Haha, there was no sarcasm in my post! Just wishful thinking :up:
I am certain ATi would not leave such an obvious bottleneck unattended on their cards. After all, I'm sure the majority of their development teams have had much more experience designing video cards than most people on these forums...
Now in waiting mode for the benchies and the actual date of the hard launch, which I think also happens today, no?
They aren't air intakes. Intake air flows down from the top of the fan and gets pushed radially away from the axis of rotation. If those were air intakes, the air would have to flow against the push of the fan.
Surely they are for decoration, or air outlets from the VRM section.
Speculating about the performance from this article is kind of pointless, since it doesn't mention which current-gen card it is being compared to.
No, that's not true at all. The bandwidth might be more than the card actually needs; excess bandwidth won't make the card much faster.
Look at the reviews for overclocking cards out there such as the 8800GTX. Increasing memory speed gives far less performance increase than increasing the core or shaders.
The fact that ATI was able to use the same core across GDDR3 (4850) and GDDR5 (4870) suggests that the bandwidth actually needed for that amount of processing power probably lay somewhere between the 4850's and the 4870's. Doubling the processing power means it should still be fine with the memory bandwidth given.
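For reference, a rough comparison of the two cards' stock bandwidth (a sketch only; the clocks used here, roughly 993 MHz GDDR3 on the 4850 and 900 MHz GDDR5 on the 4870, are from memory and worth double-checking):

```python
# Rough stock memory bandwidth of the HD 4850 (GDDR3, double data rate)
# vs the HD 4870 (GDDR5, quad data rate), both on a 256-bit bus.
def bandwidth_gb_s(mem_clock_mhz, pumping, bus_bits=256):
    return mem_clock_mhz * pumping * bus_bits / 8 / 1000

hd4850 = bandwidth_gb_s(993, 2)   # ~63.6 GB/s
hd4870 = bandwidth_gb_s(900, 4)   # ~115.2 GB/s
print(f"HD 4850: {hd4850:.1f} GB/s, HD 4870: {hd4870:.1f} GB/s")
```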
Oh and...
The biggest piece of evidence for me is that if the 5850 really is $299 and the 5870 is $399, that's a pretty big indicator of performance relative to both the current generation and the next generation. If a 5870 matches or beats the GTX295 at $399, that would be the perfect price point to compete and still sell cards at the same time.
lol, did you see the word "theoretical"? It depends on what you are doing, but more bandwidth is generally better.
The reason you don't see a huge increase from OCing memory is that memory doesn't OC well. Try cutting your memory speed in half and running a benchmark.
Your 4850/4870 argument doesn't make any sense considering how much slower the 4850 is.
Hoping for some leaked benchmarks sometime tonight.
There's a guy at Anand saying he's got the card (5870) already, but he points out the lack of drivers as of this time. http://forums.anandtech.com/messagev...IEWTMP=Linear& :shrug:
umm, when the cores are at identical speeds... it's not THAT much slower.
http://img133.imageshack.us/img133/6...board01hd8.jpg
WOW, around 5% (when both cards have the same core speed)... WOWOWOWOWOWOWOWOW...
Admittedly the GDDR5 has higher latency, but the difference isn't night and day...
And cutting that laggy GDDR5 in half bandwidth-wise is only around a 25% hit... and it'd be less if the latency weren't so bad, probably around half that, since when comparing identically clocked cards the latency hit is around 13%...
The HD 5870 sounds epic, especially if its performance is comparable to that of a GTX 295! I am not a fan of multi-GPU solutions at all, but if these new single-GPU cards are as fast as the current top crop of multi-GPU cards then we are talking huge performance increases here, folks.
The power consumption (if 190W max is true) is also most impressive.
Unless nVidia have a true ace of spades up their green-sleeved suits, I can see ATi winning this round.
Good to finally see DisplayPort being implemented on graphics cards... it would be interesting to see how this works with those new fancy Dell U-series IPS screens.
Roll on with the reviews already!
John
Urgh. DisplayPort is not a good thing, folks. No quality improvements over HDMI or plain old DVI, some theoretical advantages in areas that currently nothing takes advantage of, and it's infested with DRM crap.
That being said, it's just a little bit silly to declare that you won't buy any cards with DP connections.
I don't get the hate for DisplayPort. Isn't it an open standard? There's no DRM, no royalty fee to pay like there is for HDMI, and it has higher bandwidth than HDMI (and just a bit higher than dual-link DVI).
Someone mentioned latency, but googling doesn't help me find what DisplayPort has to do with latency.
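To put rough numbers on the bandwidth comparison (a sketch only; the effective-payload figures for dual-link DVI, HDMI 1.3 and DisplayPort 1.1 are approximate, and the 2560x1600@60 requirement assumes a reduced-blanking pixel clock of about 268 MHz):

```python
# Approximate effective video bandwidth of common display links (Gbit/s).
links = {
    "Dual-link DVI":   2 * 165e6 * 24 / 1e9,  # 2 links x 165 MHz x 24 bpp ~= 7.9
    "HDMI 1.3":        8.16,                  # 10.2 Gbit/s raw, ~8.16 after 8b/10b
    "DisplayPort 1.1": 8.64,                  # 10.8 Gbit/s raw, ~8.64 after 8b/10b
}

# Bandwidth needed for 2560x1600 @ 60 Hz, 24 bpp, reduced blanking (~268 MHz pixel clock).
needed = 268e6 * 24 / 1e9   # ~= 6.4 Gbit/s

for name, gbps in links.items():
    print(f"{name}: {gbps:.2f} Gbit/s (2560x1600@60 needs ~{needed:.1f})")
```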
Here's the weenie. Quote:
The new HD 5850 will launch on September 23 also. We hear it is priced below $299. HD 5850 will sport 1440 stream processors and it will have lower clockspeeds than its big brother. We are hearing somewhere around 700/1000 MHz and it can also display simultaneously on three LCDs at up to 2560×1600 resolution.
Well, how much damage do 160 fewer SPs do? If it's still near the HD 4870X2 (keeping CrossFire limitations in mind) it will be a great card! The $100 difference has to come from somewhere, otherwise no one would buy the HD 5870.
http://www.xtremesystems.org/forums/...d.php?t=234155
The FUD has spoken: 1440 SPs clocked at 725 MHz. The HD 5870 is at 850 MHz, so the HD 5850 is clocked approximately 15 percent lower.
First rumored benchmark is Crysis:
HD5870 Crysis Benchmark Score
CPU: AMD Phenom II X4 955BE
Win 7 RTM
VGA: HD5870 1GB
Crysis 1920x1200 4xAA + 16xAF DX10 Very High
min: 30.**
avg: 43.**
max: 54.**
http://www.chiphell.com/uploadfile/2...3093743260.jpg
http://www.chiphell.com/uploadfile/2...9111454394.jpg
http://74.125.159.132/translate_c?hl...ivsFsnZPqEd56Q
TPU 4890 Xfire review:
CPU: Intel Core i7 920 @ 3.8 GHz
Software: Windows Vista SP1
Drivers: NVIDIA: ForceWare 181.20, GTS 250: 182.06, GTX 275: 185.63
ATI: Catalyst 9.1, HD 4890: 8.592.1
http://tpucdn.com/reviews/ATI/HD_489..._1920_1200.gif
(don't know how W1zzard gets his ATI products to run so smoothly ;P)
http://www.techpowerup.com/reviews/A...ossFire/6.html
Well, looks like it is faster than 4890 CrossFire and the GTX 295 :eek:
:lsfight:
http://www.fudzilla.com/content/view/15436/1/
Quote:
We finally figured out the final specification of the chip that we called RV870, and the fact that AMD plans to call it Radeon HD 5870 doesn't come as a big surprise. The chip works at 825MHz and has 1600 shaders, two times more than RV770, which indicates that the chip is two times faster than the year-old RV770.
The chip has as many as 2.1 billion transistors, more than twice the number the RV770 packs, which has 956 million transistors. The card uses GDDR5 memory clocked at 1.3GHz (5.2GHz in quad mode) and can provide more than 150GB/second of bandwidth. The power of this card stays at 180W, while in idle the power drops down to 27W, three times less than the 90W of the 4870.
By a rough specification-based estimate, the Radeon HD 5870 could end up two times faster than the Radeon HD 4870, but realistically you should expect the new card to be faster by about 60 percent across the board.
If it is an open standard, adding another port wouldn't cost much. Like largon said in the thread you made, there are adaptors, which nullifies the point you bolded in the quote. They're not electrically compatible, but adaptors still exist.
http://www.monoprice.com/products/su...04&cp_id=10428
xvYCC support: when we have monitors capable of it, I do think DisplayPort will support it as well.
Well, here's my conclusion: DisplayPort will not affect anything that exists today, so why not embrace it?
http://en.wikipedia.org/wiki/Display...tages_over_DVI
Here are a few advantages. It really needs a citation link though.
http://www.edn.com/article/CA6594089.html
The only problem I see is HDMI/DVI to DisplayPort, which isn't mentioned. Then again, if that ever happens, it means all TVs and all computer monitors by that time will be using DisplayPort.
Crysis in full detail isn't a dream anymore :shocked:
Did their memory management get more efficient? IIRC, 4xAA kills memory (though I admit I've never tried 4xAA in Crysis myself).
Get the :banana::banana::banana::banana: out of here! If those numbers are true I'm picking one of these up as soon as possible, regardless of price. That, and it's the 9700 Pro part 2. It might just be abnormally good at Crysis, though.
Damn, I hope those numbers are true.
The card rocks.
:kissbutt::owned:
another comparison from that thread:
http://img5.pcpop.com/ArticleImages/.../001209209.jpg
PhII 955 + HD5870 1GB: 43FPS
It appears to be literally two perfectly scaling 4890s in a dual-core arrangement. Whoa.
On the other hand, here is another perspective:
http://pctuning.tyden.cz/ilustrace3/...ars/crys_2.png
edit: here is one more from THG:
http://media.bestofmicro.com/W/3/176...5_chart-04.png
According to this: 4870X2 < GTX295 < GTX280 SLI < 5870 1GB OC < GTX280 tri-SLI < 4870X2 QuadFire < GTX295 quad-SLI
I think you're missing the point. You can convert DP or MiniDP to DVI, sure, meaning you can connect a monitor to it no problem. But because of the DRM, you can't play back HD content on that monitor, even if the monitor is perfectly capable of doing so. If DP weren't an option, HD content marketed at PC users would have to be DRM-free, because our current standards of DVI and VGA do not support HDCP. As for colour space, I don't think you can just add support at a later stage, but I could be wrong on that point, not being an expert on such things.
DP has no advantages for the current market and will continue to have no advantages for the next few years at least. Pushing DP is all about content control and rights restriction, nothing more: it gives the end user no benefit, and there is no compelling reason for us to switch to it until HD media is regularly produced and available in resolutions above 2560x1600. And by that point I sincerely hope we've all abandoned this ridiculous screw-the-consumer DRM bullcrap.
Super VGA was the best. Better than dual-link DVI...
Will you guys take it somewhere else with the displayport crap?
So the 5850 is 1440 SPs at 725 MHz = 2.08 TFLOPS... and the 5870 is 1600 SPs at 850 (or 825?) MHz = 2.6-2.7 TFLOPS?
Wow, I can't even imagine what a 5870X2 would do.
As far as memory management goes, it appears that ATI has been better at it than the Nvidia equivalent. Take a look at benches of G92-based cards with increased AA or higher memory usage vs. RV770 cards...
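For what it's worth, the peak-FLOPS arithmetic behind those numbers (a sketch assuming the usual 2 FLOPs per stream processor per clock for a multiply-add; the SP counts and clocks are the rumored figures from earlier in the thread):

```python
# Theoretical peak single-precision throughput: SPs x 2 FLOPs (MAD) x clock.
def peak_tflops(stream_processors, core_clock_mhz):
    return stream_processors * 2 * core_clock_mhz / 1e6

print(f"HD 5850: {peak_tflops(1440, 725):.2f} TFLOPS")   # ~2.09
print(f"HD 5870: {peak_tflops(1600, 850):.2f} TFLOPS")   # ~2.72 (2.64 at 825 MHz)
```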
Crysis or Crysis Warhead? Not the same benchmark, not the same results, and between maps there is a huge difference; those numbers don't mean anything without more info about what benchmark was used. The fact that they refer to "DX10 Very High" means it's Crysis, not Crysis Warhead, so comparing the results to Crysis Warhead is apples vs oranges. Quote:
Crysis 1920x1200 4xAA + 16xAF DX10 Very High
min: 30.**
avg: 43.**
max: 54.**
Without info on which map it was, whether it was a fly-by or not, etc., no conclusion can be drawn from this; an average of 43 fps at 1920x1200 4xAA/16xAF can be had with a slightly OC'ed HD 4890 in Crysis... depending on the map/benchmark.
Well yes, we understand this. That's why I've posted random 1920 4xAA benchmarks from 4 different sites. There's no telling the map, whether it's regular/Warhead, whether it's 32/64-bit, 2GB of Walmart RAM, 12GB of Dominators, etc. It's just to give you a relative idea, since there's nothing to compare it to...
No, you posted random Crysis Warhead benchmarks. Not the same :)
If a standard "demo" run was used for that 43 avg, it would mean the HD 5870 will end up close to 2x as fast as the GTX 280 and HD 4870 X2.
If those numbers are from the first level or the ice level, then this card is golden. The fly-by intro is, I think, the most brutal part of Crysis, and even dual GPUs can't maintain >30 fps there. With a pair of SSC GTX 285s, an i7 @ 3.8GHz and an X25-E SSD, I was hitting lows of 27-28 fps during the fly-by.
As for the beginning of the thread, if it's true, I hope the 25-40% performance-increase number does not refer to the 4890, because then you are talking about the same speed as an overclocked GTX 285.
Lol, even if this does conquer Crysis, it's been what, 3 years? Crysis holds the title of the longest-running game to totally devastate high-end PC gaming, lol. Crysis Warhead and Crysis are different though; IIRC the original Crysis is slightly more demanding, by ~5%.
You can convert DP to HDMI. From what I see, they use a packet-based delivery system, which would probably allow adding an extra colour gamut easily (I'm no expert, nonetheless).
I don't get it. DP can implement HDCP (and they did), so what incompatibility is there? They even mentioned backward compatibility as one of their main selling points.
Dang, need more money to get screens :D
Finally gaming as it was meant to be played on ati cards!
I wish they'd post a close-up view of the card running. It's right there! Not in a case; c'mon, just a little closer :ROTF::rofl:
Dang. Did no one bring a mirror shot and a tele-zoom to one of the presentations? I'm getting sick and tired of these blurry shots taken with mobiles...
Here is a repost - everyone is saying "dual-core" regarding the 5870 :D:
http://www.xtremesystems.org/forums/...&postcount=174 Quote:
Dual core / MCM: This generation supports a new "hard" method of dual-GPU rendering that differs from the previous CrossFire implementation we see in R800 and RV870 CrossFire situations. The Chinese translation is "dual core" or split-frame hardware. It has something to do with the way the shaders/ALUs etc. operate inside an individual GPU (a method of simultaneous operation in the hardware, similar to dual core, yet not actually two dies in one package a la MCM). Perhaps there is no more real-time compiler in the driver and it's all handled at the hardware level by the scheduler.
Because the core of the chip is so modular and scalable, with certain areas sharing parts of the die (ROPs + memory controller logic), you are able to divide the specs in half (1600/256/80/32 to 800/128/40/16) and have two parts (RV870/RV830), and RV870 appears as two RV830s, yet it is still only one die. Hence the term "dual-core", though it is more like "modular". Some people object that all GPUs are multi-core because each of the shaders is like a core by itself. Well, here we are dealing with two large arrays of 800 shaders, along with other standalone logic that communicates with either one of the two identical arrays.
More specifically, one RV870 die is composed of two RV830-like 160x5 clusters/shader arrays (like two revamped current-gen RV770s in one die) sharing certain features but connected via "internal CrossFire" and working in unison, and the entire design is a continuation of the R600 architecture. It is load-balanced, efficient, and requires no CrossFire software optimization (because it is hardware-level communication); it works via SFR 8x8-32x32 and is bandwidth-intensive. The board is using next-gen 5GHz GDDR5 to provide the required bandwidth.
So, apparently they've slapped together two 40nm RV770s... so it's easy to see where the "dual-core" confusion comes from. Cypress is like a 40nm 4890X2 in one die! In that sense it is like a "native dual-core" CPU. Now that specs are known for Cypress & Juniper (RV870 & RV830), you can expect that the remaining parts Cedar & Redwood (RV840 & RV810) are 3/4 Cypress & 1/4 Cypress respectively, and that Hemlock (R800) is 2 x Cypress in the same fashion as the HD 4870X2, on a single PCB. So Cypress is like a Larrabee, except that it uses 2 RV770 cores, whereas Larrabee uses several P54C cores.
(Even though this isn't an actual RV870 die shot, rather an artist's rendition posted earlier - when you do see an actual one, it will look more like the first two images, i.e. a symmetrical reflection of two identical core areas over a center axis, rather than the third picture, which is an actual die shot of RV770. Notice in the 3rd pic that the RV770 is asymmetrical by design, not resembling a dual-core architecture.)
I wrote this a few weeks ago about the RV870 core, based off what I was reading about it. In the bolded part, the report I read said it was "200% efficient" in scaling between the two cores because of zero overhead.
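If the hardware split-frame idea is anything like described, the basic concept is easy to sketch. This is only a toy illustration; the 2-way split, the tile size and the load-balancing rule are my own assumptions, not ATI's actual scheduler:

```python
# Toy illustration of split-frame rendering (SFR) across two shader arrays:
# the frame is cut into tiles and each tile is assigned to one of the two
# halves, balancing the pixel count between them.
WIDTH, HEIGHT, TILE = 1920, 1200, 32   # assumed frame and tile size

def assign_tiles(width, height, tile):
    loads = [0, 0]                      # pixels assigned to array 0 / array 1
    assignment = {}
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            target = 0 if loads[0] <= loads[1] else 1   # simple load balancing
            pixels = min(tile, width - x) * min(tile, height - y)
            loads[target] += pixels
            assignment[(x, y)] = target
    return assignment, loads

tiles, loads = assign_tiles(WIDTH, HEIGHT, TILE)
print(f"array 0: {loads[0]} px, array 1: {loads[1]} px")  # roughly a 50/50 split
```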
:rofl: :ROTF: :clap:
LOOK!
http://tweakers.net/ext/f/cAZ7FbulDZ...4yrdM/full.jpg
:shrug:
http://gathering.tweakers.net/forum/...ges/1365484/30
sure it's photoshop ^^
I think they are air openings. If these cards are put in CF, especially with the backplate, they would be way too close together, and the card would need another way of getting air into the fan: the rear ports. This is a bit like the GTX 280 and its rear port for tri-SLI mode :up:
http://techreport.com/r.x/geforce-98...ri-sli-rig.jpg
For sure, let's just cross our fingers and HOPE I'm wrong :D
They can't be. The physics of the fan enclosure and rotation don't allow it. The blades shoot air out in all directions and the intake comes from the center...
I tell you what would look sick in my case is this (in sli on power of 3 platform):
http://farm4.static.flickr.com/3277/...12bdb7.jpg?v=0
I just need a PSU with 11 8-pin connectors, and I'm golden.
:ROTF:
Nvidia is supposed to show demos today, ain't they?
GL with that ;)
I'm sure if the blower (intake) were obstructed it would start to feed from the rear ports. Can someone please get a GTX 280, cover the fan with a piece of paper, and observe the result at the rear ports? :shocked:
Not even an FTW edition :(
The last article with Eyefinity has DiRT 2 (DX11) running :X
I would love more photos of that.
No, those numbers are NOT from Crysis Warhead... Quote:
If those numbers are from the first level and the ice level, then this card is golden.
Techradar has more info on Eyefinity, and it might just be the tech that gives consumers a new way of thinking about visual computing. Very, very cool stuff.
http://www.techradar.com/news/gaming...-review-634244
Nice article. I'm really interested in non-standard 3-monitor setups for gaming, which they didn't cover. For example, I have a 26" and I was thinking about getting some cheap 17-19" screens, rotating them to be vertical, and putting them on the left and right sides. But I think this is a long way away from happening.
http://img35.imageshack.us/img35/8683/003ix.jpg
http://img188.imageshack.us/img188/279/004scy.jpg
http://img32.imageshack.us/img32/2466/005mq.jpg
http://img188.imageshack.us/img188/6342/007ns.jpg
Look at the third slide; there is a regular HD 5870... those leaked photos show the real retail card cooler... but it is strange.
And it still can't play Crysis at a good fps. Hopefully the X2 can.
turn off DX10 VH mode ;)
use a nice CCC config http://www.madshrimps.be/gotoartik.php?articID=850
and looks like my mouse :rofl:
http://i26.tinypic.com/ao2oac.jpg
Article removed ... :(
...but I did copypasta..
Quote:
AMD is looking to expand your horizons with its Eyefinity technology, allowing you to run up to six monitors in HD from a single graphics card – and TechRadar has had the chance to play with the latest in graphics tech.
Although you'll have to wait and see how ATI is bringing the technology to our homes, Eyefinity is close to release and looking rather stunning.
Essentially, the technology allows you to run multiple monitors in high definition from your graphics card – and TechRadar was at the top-secret launch event to test out whether it's merely a gimmick, or if it's really a game changer for the company.
Although it is working hard on getting partners to provide the kind of monitor hardware necessary for the practical necessities of sticking six monitors together, AMD seems aware that the majority of us will not be forking out for a half dozen panels just yet.
But, as we flew an aeroplane through stunning vistas (not the OS) with our peripheral vision taken up with screen and not wall, we have to confess that the high definition multiple monitors were certainly helping us feel more involved in things.
Even with the rather sexy specially supplied Samsung monitors that sported much thinner bezels, the black lines were, of course, noticeable, but it was amazing just how quickly your eyes start discounting the edges of the screens.
The power of the technology that supplied Eyefinity was clear – this was a meaty rig indeed to provide stutter-free six monitor action (6 x 2,560x1,600 resolution, in fact) but it wasn't so pricey that it would be beyond the means of an enthusiast gamer.
Stick six 23-inch monitors in the mix, however, and you'd be looking for seriously deep pockets.
Still, as a concept, it was pretty damn cool, and we bore that in mind when we moved over to a more feasible three monitor setup – which will be available earlier than the six-screen behemoth.
Now we should point out that multiple monitors are nothing new, but in gaming terms getting a stable gaming experience while using the setup has been problematic.
Eyefinity (and the surrounding tech) changes that, and in spectacular style.
On the three monitors, the gaming experience was, in all honesty, not significantly worse than the six monitor set-up – and less likely to be pie in the sky for Mr average income.
Your peripheral vision extends far wider than it does vertically – and with the focus on the middle monitor, the side monitors gloriously plied our eyes with extra information without detracting from the gameplay.
The game being featured on this rig was Left4Dead, and it was certainly an advantage to be able to sense further around ourselves. It literally expanded our horizons in terms of gameplay – and, as a nice little added bonus – it made it much nicer to spectate.
Apparently many games are perfectly capable of taking advantage of the ridiculously large field of vision, because they take their maximum resolutions from what the graphics card tells them.
Because EyeFinity allows you to essentially treat the entire surface of your monitors as a single resolution, you simply choose what you are offered and the game adapts – giving you glorious action.
AMD was at pains to point out that this isn't applicable to all games, but an extensive list was shown including major first person shooters like Half Life 2, Crysis, and Far Cry 2, real-time strategy games, flight sims and so on that could run well on EyeFinity setups.
Of course, multiple monitors have uses outside of gaming – and EyeFinity allows you to set up the monitors in multiple configurations – with some portrait and other landscape, in an inverted 'T' with four monitors or in an 'L' shape for instance.
This, of course, boosts productivity; for people who need multiple programmes running at the same time (AMD's example was city traders) and have the computers that can cope, this will prove to be a major boon.
We also asked AMD if the EyeFinity tech could cope with monitors with different resolutions and sizes, and received an affirmative – which means that you could begin to add monitors as and when you like, including re-using old ones.
Plus you can clone and span monitors to your heart's content, or even group screens together.
It's pretty damn cool, especially considering that it is close to a public release, and, although you might not be forking out for a six screen setup straight away, we can see the three monitor configuration gaining some traction.
Check my previous post :).
thanks for the copy
This part is not special; I do this already at work with one monitor vertical and one horizontal, and they can be aligned any way I wish. Vista/Win7 are pretty good at these kinds of things (though Win7 doesn't cause the screens to flicker a dozen times as the GPU tries to get everything lined up). Quote:
Of course, multiple monitors have uses outside of gaming – and EyeFinity allows you to set up the monitors in multiple configurations – with some portrait and other landscape, in an inverted 'T' with four monitors or in an 'L' shape for instance.
43 FPS Average at those settings in Crysis is a very encouraging result.
I'm waiting to see how the 5850 does.
Perkam
Man, I don't want to see this Eyefinity crap. Who the hell even games with 3 monitors anyway? I want specs, benchmarks, pricing, availability, pictures with the cooler removed, etc...
Patience, young Skywalker. Soon :).
I do ;)
The decision to pull this one was hard, but the reason has nothing to do with the credibility of the source. My phone rang in the middle of the night and a "nervous" AMD representative was on the other end of the line. We're not under NDA, mainly because they decided to send Swedish IDG (which is really lame) instead of us to the London event. We explained that if they don't simply give us the information, we will find it out anyway and share it with others who are perhaps not meant to know before the embargo lifts. After some talks, we came to an agreement. I was a bit surprised they reacted this way, since we also shared quite a lot before the launch of RV770, but he simply said that there were things we had written that he didn't even know and had to look up.
The exact details will stay with me though ;)
Everything in the post is true, and you can read it in the first post of this thread and in many other places around the web, but not at NH anymore ;)
//Andreas
So what time is this big announcement from Ati ?
I think the 8th (London) and today's press events were just about Eyefinity (thus the pictures).
Many reviewers already have the card, but the drivers will be handed out next Monday, so any numbers before then seem untrustworthy.
Regarding reviews, 22 (23?) September seems to be the date.
Seems reasonable enough. And I can't believe they sent IDG instead; that is lame.
About Eyefinity: that's one of the coolest new features, and with the supposed performance of these cards it seems possible to actually make use of at least three screens. Too bad I'm on a student loan; $800 a month isn't exactly much...
Finally some consumer tech that makes 3 monitor possible!
/awaits leaks
F*ck Eyefinity, it brings nothing new to the table. I want benchmarks, specs and prices! I thought that's what we would get today. And also, isn't Nvidia scheduled to release some info on the same day ATi did?
I congratulate ATi for making triple-monitor solutions affordable (on a support basis), but I wanted some perf numbers, and why not some DX11 luv.
@marten_larsson: the ability to see those things comes from FOV, not from adding more monitors, though I agree with others on the immersive part.
No, it doesn't. To get the same experience I get with three 21" monitors next to each other, I would need a 50" screen at the same distance. But the resolution would be 1920x1080 or 2560x1600, which would be crappy, to say the least, compared to three displays at 1680x1050 (since I can't afford better screens); that's 5040x1050... It's impossible to increase FOV on a single screen without giving stuff strange proportions.
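A rough way to see the FOV point (a sketch assuming the common Hor+ convention, where vertical FOV stays fixed and horizontal FOV scales with aspect ratio; the 75-degree vertical FOV is just an example value):

```python
import math

# Hor+ scaling: horizontal FOV grows with aspect ratio at a fixed vertical FOV.
def horizontal_fov(vertical_fov_deg, width, height):
    v = math.radians(vertical_fov_deg)
    return math.degrees(2 * math.atan(math.tan(v / 2) * width / height))

print(f"1680x1050 (single): {horizontal_fov(75, 1680, 1050):.0f} deg")   # ~102
print(f"5040x1050 (triple): {horizontal_fov(75, 5040, 1050):.0f} deg")   # ~150
```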
I'm considering a 5870x2 at some point eventually hmmm...
Maybe the 5850 will have a sweet pencil mod though........
Can't wait till these things get released.
I think 6 monitors might be achieved by daisy-chaining some LCDs via the DisplayPort connection.
Wha, no benches? ATi gets lame.
The speculation party is now going to turn into a whining party because people got their speculation wrong and the believers are now pissed; the same funny stuff every time :ROTF::rofl: