Where's mah cookie?
;)
I don't know if this has been said yet or not, but those new clear pictures show the vGPU controlled by a Volterra voltage controller.
vGPU softmods out of the gate.
Which, IMO, is SWEET! :up:
hahah!
Well, we have to wait for a review to say whether the two groups communicate via SFR or not... so it's like the two groups of 800 could just as well have been one large group of 1600 and the thing would operate just the same; it was simply easier to start with two RV770-style groups and work from there? There's nothing special about the diagram having dual rasterizers and hierarchical Z?
After Anand's review I'll likely be sending u a cookie :)
Hey, that last picture about the "sweet spot strategy" and the market segments for each chip is wonderfully pretty with all those chip images and so on, but what I read there is:
Hemlock => $400-500
Cypress => $250-350
Juniper => $120-200
Redwood/Cedar => <$120
What we know right now about Cypress prices is $300-450, so what is AMD trying to say with this slide? That the prices quoted so far are actually inflated because of the novelty? :ROTF:
As I read those numbers I thought to myself that it has been a long time since I last saw a new gfx card from a competitor win in absolutely every single game tested. Then I saw the Wolfenstein numbers and was able to relax :)
Awesome! A few more driver revisions and it will hopefully win that benchmark too.
Thanks for sharing.
Goodbye image flickering due to memory frequency changes à la RV770/RV740/RV790.
Quote:
Originally Posted by slides
I spy a VT1165 vGPU regulator control chip, so RivaTuner and others should already be able to adjust GPU voltage :D.
Nvidia's shaders are spread around the die too. Just because they're not all clumped together like in RV770 doesn't mean anything :rofl:
And yes, there's nothing special about multiple rasterizers (well, besides us finally getting them). Just like there's nothing special about having multiple ROPs or TMUs. Can we please let the dual-core rumours die? :( RV870 is a single-die monster, no SFR or AFR crap.
$350 is a sweet price for the HD5870 considering how much the GTX 295 costs now, but I'm sure the price will be up a little more, like when the previous gen first debuted. I assume this is the price for the 1GB version; the 2GB might cost $70-$100 more. Much excited, but I guess I'll still hold my breath and see what Nvidia has up its sleeve. Hopefully they're gonna open a big can of whoopass instead of a huge grain of salt. :ROTF:
Whoever said multi-monitor support isn't important is fooling themselves. If multi-head displays had done D3D back then, I wouldn't have had to buy that Matrox TripleHead2Go a few years ago...
Gaming with 57" of on-screen real estate is a whole new experience. I play primarily FPS titles like CS: Source. Heck, the 3-screen setup I'm using now carried me through several seasons of CAL-IM back in the day... Peripheral vision in games is just something to behold and I wouldn't give it up for anything! :up:
Of course you need the hardware to drive the displays... I get sad when I have to go back to a single display because a game runs badly or doesn't support 16:9. Crossfire or SLI is a requirement for my system.
So I hope ATI does have some cards that can accelerate more than one display. It will open up the market to a whole new generation of gamers. My question is: can these cards do multi-monitor with CF enabled? I'll keep using my TH2Go until then... :eek:
Well, I still think the shader group units are semi-independent and communicate with each other to render images exactly the way two physically separate modern GPUs render in SLI/CrossFire. There was so much talk of this on the Asian BBSes, I don't see how they could have just made it up and run with it. It could all be wrong, but I still want to read a review explaining the rendering process. Apparently to you guys, largon & trinibwoy, it is no different from G80/GT200. Two weeks and then I can send the cookie; trini, you get nothing but a good laugh.
:)
Edit: chopped-up image comparing the 5850 to the 5870... 81% vs 100%. How do we unlock the missing 160 shaders?!?! :wasntme:
http://www.pctunerup.com/up//results...850%205870.png
http://www.hwupgrade.it/forum/showpo...ostcount=10248
Guys, is it possible that those 160 shaders help with Eyefinity and AA?
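Quick back-of-envelope on that 81%, assuming the leaked specs are right (1600 SPs @ 850MHz for the 5870, 1440 SPs @ 725MHz for the 5850 - treat both as rumours until launch): raw shader throughput alone would put the 5850 at about 77% of the 5870, so 81% in that chart actually means it scales a bit better than the paper math.
Code:
# Rumoured specs only -- nothing here is confirmed by AMD.
hd5870 = {"shaders": 1600, "core_mhz": 850.0}
hd5850 = {"shaders": 1440, "core_mhz": 725.0}

def paper_throughput(card):
    # Peak ALU throughput scales with shader count x clock.
    return card["shaders"] * card["core_mhz"]

ratio = paper_throughput(hd5850) / paper_throughput(hd5870)
print("5850 paper throughput vs 5870: %.0f%%" % (ratio * 100))  # ~77%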
And it will be cheaper than MSRP when it hits the street, that's for sure, just like it was with every previous generation.
In other words, a $500-550 5870X2 will compete with the ~$480 GTX295...
Well, that's a stretch, but I expect the 5870 to be around $360 on the streets so it's definitely a good deal. :D
Quote:
But already the $399 5870 1GB single-GPU card is destroying the $499 GTX295...
The ridiculous thing was that you guys called it "only" 10% faster (5870 > GTX295) when it's not even in the same class.
Quote:
So what are you saying? What is ridiculous about what is happening? :shrug:
What in the world are you talking about? Every SIMD in every GPU is independent. You're basing your theory on the fact that AMD showed two separate groups in an architecture diagram? :confused: So I guess this diagram makes you think there are 20 different GPUs rendering in crossfire? :rofl:
http://i29.tinypic.com/2hnown6.png
That's some ridiculous scaling. Forget the 5870: if the 5870X2 is only $150 more, then it is the card to get. It's more than double the performance of the 5850 at less than double the price. I am basing this on the performance in the 5870X2 comparison.
For the most part the GTX 295 appears to be faster than the 5870 when 4x AA is used. However, at 8x it loses a lot of the time. At that performance level, I can see NV's next-generation part being much, much faster. It will still lose to the 5870X2 I think, but I can see it being 50% faster than a single 5870 (it will lose to an X2 because the 5870X2 appears to have ~100% scaling).
The reason I can see it being 50% faster is that the GTX295 has really conservative clocks on both the shader clock and the main clock. If the 510-shader part is true and it is indeed a new generation, then I can easily see it being 50% faster than a 5870.
At that point, NV definitely has a chance, because it gives them the opportunity to release an X2 part of their own that beats the 5870X2 by a rather significant margin. If NV is able to take both the single- and dual-GPU crowns within 4 months of the 5870 release, then it could win this generation.
A lot of speculation, but the 5870's performance leaves a lot of room for this to happen.
Since the thread has already passed 100 posts, to summarize it:
We all know that RV870 is a native dual-core design with shared memory, shared TMUs, etc. These hardware innovations are just the beginning; beyond the new hardware, RV870 brings deeply reworked software, for example the CrossFire mode. For details, see savage's major technical article (http://bbs.chiphell.com/viewthre ... & extra = page% 3D1); I won't repeat it here.
With native dual-core hardware support, the biggest benefit is that CF can evolve from today's AFR to the more sensible SFR (if you don't understand AFR and SFR, read savage's great article). SFR thoroughly gets rid of the PCI-E bandwidth bottleneck that has kept CF and SLI from using SFR; CF no longer relies on driver optimization, and 1+1 gets infinitely close to 2. Everyone will be deeply impressed by RV870's native dual-core architecture; at least in my case, this is the first time in years I have felt graphics card technology take a major step forward.
Going back to the Crysis results: RV870's Crysis performance is strong for good reasons, but it is also lucky, because whether with CF or SLI, efficiency in Crysis has never been high. RV870 simply exploits this SFR loophole and plays the role of Crysis terminator. But I hope everyone views RV870's performance calmly: in games where CF or SLI is already efficient, RV870 cannot be as lucky as in Crysis. Still, overall, as the pioneer of dual-core technology combined with many shared components, RV870's ratio of die size and power consumption to performance will be the highest among comparable products; that is beyond doubt.
I have hastily said this much; I had wanted to save it for my RV870 test article, but I was so fascinated by the RV870 architecture that I couldn't resist letting a bit out. More content will come when my RV870 test is released; let's share it then.
Quietly asking a small question: going from AFR to SFR doesn't really seem like "evolution"; it can only be called an inevitable trend once performance develops to a certain point.
The chip has dual Processor Units; each PU contains 160 VLIW units (crudely put, two RV770s form an RV870), but it is certainly not a simple two-die package like before.
CrossFireX and SLI are both AFR (alternate-frame rendering): the two GPUs take turns rendering whole frames, efficiency depends on how quickly the driver handles it, and the frame-delay error cannot be compensated for.
RV870 uses SFR (split-frame rendering): each frame on screen is divided into many small tiles of 8x8 to 32x32 pixels, which the two cores process in the same way. As long as the bandwidth is large enough, you can reach nearly 200% of the efficiency of an RV770 at the same GPU clock.
(SFR is the heaviest consumer of bandwidth, which is why the RV870 XT will use the world's fastest GDDR5.)
The above messages are from AMD's internal sources in Mainland China.
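For anyone fuzzy on the AFR/SFR distinction that post leans on, here's a toy Python sketch of how the two schemes divide work between two GPUs. The tile size, checkerboard split and GPU count are textbook illustration only, not anything confirmed about how RV870 actually works:
Code:
# Toy illustration of AFR vs SFR work division -- not confirmed RV870 behaviour.
NUM_GPUS = 2
TILE = 32  # SFR tile edge in pixels (the post claims 8x8 up to 32x32)

def afr_gpu_for_frame(frame_number):
    # AFR (alternate-frame rendering): whole frames alternate between
    # GPUs, so each individual frame still waits on a single GPU.
    return frame_number % NUM_GPUS

def sfr_tiles(width, height):
    # SFR (split-frame rendering): one frame is chopped into tiles and
    # the tiles are spread across both GPUs, which render in parallel.
    tiles = []
    for y in range(0, height, TILE):
        for x in range(0, width, TILE):
            gpu = (x // TILE + y // TILE) % NUM_GPUS  # checkerboard split
            tiles.append(((x, y), gpu))
    return tiles

print(afr_gpu_for_frame(0))                         # frame 0 -> GPU 0
tiles = sfr_tiles(1920, 1080)
on_gpu0 = sum(1 for _, gpu in tiles if gpu == 0)
print(on_gpu0, "of", len(tiles), "tiles on GPU 0")  # roughly half each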
Pray tell, why the hell would ATI make a "native dual core"? That doesn't even make sense with GPUs. If they are going to CF two GPUs on one die, they would be better off just making it one GPU. It'd be faster with less overhead and wouldn't cost any more to manufacture.
Basically, instead of making one mega core with 1600 SPs, you make two 800 SP cores on the same die with fast interconnects to support SFR. This way you can concentrate on refining the process tech for the lower-transistor-count 800 SP cores. It will not be as fast as a native 1600 SP chip, but it will be easier to produce. And it appears that the cores share a single memory controller. So I guess instead of calling it dual core it may be better to call it dual shader mega-clusters.
http://i28.tinypic.com/fduic7.png
FINALLY!
The bad news is that the low idle power consumption is probably because the card clocks the memory way down, to 100-300MHz, not because ATI has made significant improvements in the memory controllers. We'll see.
To me it seems the 5xxx series is as much of a victory for AMD as the 4xxx was: it shows steady growth, and it will be a victory in the market as well for a few months afterwards; only time will tell. But Green has been producing huge chips with a lot of proprietary odds and ends, no matter what anyone says, and they had to learn that lesson with the 4xxx; the chances are all we will see is a shrink with some minor improvements. As for someone saying that NV is being quiet: no it's not, they've been spewing propaganda ever since the 5xxx talk began. As for Eyefinity, I really don't care as long as it doesn't take up too much die space.
Edit: oh yeah, "Volterra" voltage controllers are indeed sweet :D
I wanna see the 5870 X2 against the GTX380 (or even GTX395). Whichever wins (most likely the 5870X2) will be my next buy.
I'm not counting on the GTX395 though... if it does get released, it will come out months afterwards like the GTX295 did and will cost significantly more.
So my next buy is looking to be a 5870 X2 :) a massive step up from my GTX285 if these benches are true.
Hopefully it won't be too bottlenecked on a Q9550 @ 4GHz... anyone think it will be?
Hey guys, I just wanted to give my two cents and also get your opinion on my future upgrade. These new cards do sound really good. I am currently running an Nvidia 8800GT and I wanted to know which version of the card you guys would recommend upgrading to. I am a picky person. I do play games on my PC, watch Blu-ray, and other cool stuff.
I also wanted to say that I was playing NFS Most Wanted on my 360 and there were a lot more graphical effects than I currently have on my PC. I play the same game on my PC and there are a lot of graphic elements missing. Does this mean that my graphics card isn't as powerful as my 360? Your thoughts, please?
Josh
I would just keep waiting it out; prices will go down fast in the next 4 months. As long as there isn't a game you can't enjoy without new hardware, don't upgrade.
Quote:
Hey guys, I just wanted to give my two cents and also get your opinion on my future upgrade. These new cards do sound really good. I am currently running an Nvidia 8800GT and I wanted to know which version of the card you guys would recommend upgrading to. I am a picky person. I do play games on my PC, watch Blu-ray, and other cool stuff.
I also wanted to say that I was playing NFS Most Wanted on my 360 and there were a lot more graphical effects than I currently have on my PC. I play the same game on my PC and there are a lot of graphic elements missing. Does this mean that my graphics card isn't as powerful as my 360? Your thoughts, please?
Josh
If I remember correctly, the Xenos GPU of the 360 is about as powerful as a 7800/7900 series card, so obviously your 8800GT is better. Judging from the specs of your rig I think you're being held back by that video card and I would say it's time to retire it and upgrade. It depends on what resolution you game at, which I assume is 1920x1200; an HD4890 or GTX 275 should work just fine, or better yet, wait for these HD58xx series cards coming out next week. Either way, you'll see a big improvement over that 8800GT.
Jesus, stop quoting pics!
Hopefully ATI will announce an insane HD5890, and someone can put 4 GPUs onto one PCB, then use four of those in a sixteen-GPU crossfire setup and drive 96 screens, each at 1920x1200.
:) - Wishful stupid thinking.
With all due respect, that's quite an assumption that all game companies out there want to code their games like Crysis.
The truth of the matter is, different companies code things differently - and yes, while Crysis is visually stunning, that doesn't mean it was coded properly for the hardware.
Who's to say that Crytek didn't realize they made a huge mistake and are changing things for the path that video cards and API standards are going down? Keep in mind that at the time CryEngine2 was being developed (and these things are laid out well before they get released), the industry wasn't sure how DX10 and unified shaders would pan out, how video card and fab technology would develop, etc. For all we know, since we do not know the internal workings of Crytek (and they wouldn't admit it anyway), they gambled on the wrong architecture path or the direction DX went off in.
Looking at the demo of CryEngine3 on Eyefinity - the fact that CryEngine3 is already running at large resolutions in what looks to be pretty good detail - who's to say that Crytek didn't recognize their deficiencies and hasn't optimized their new engine for the direction hardware is going in now?
Crysis might be the most visually stunning game right now, but that does not mean it is the future direction of video game design (it could be, but that's a big logical leap given the limited knowledge we have of how it was actually coded/optimized).
As for the rest of the benchmarks out there...
Can people friggin' wait for some real-world benches? When the 48xx series was released, the cards were being compared to the 9800GTX and 8800GT and people were saying "Wow, 48xx is such a disappointment, it's only being benched against the 9800GTX and 8800GT." Then real-world benches came out and showed that they were too close for comfort for the GT200 cards.
Definitely, but the 8800GT should be stronger than an Xbox 360. As for the upgrade card, it depends on what resolution you plan on gaming at... a 5870 might be overkill if you only game at 1680x1050, in which case just wait for the release and for the prices of 4890s or GTX 275/285s to drop like a brick :)
Well, from these slides we would expect a ~35-38% increase in framerate over the GTX285 in Crysis 1080p Very High (no AA/AF).
So that is pretty much 4870X2 numbers - minimum framerate probably around 19-20, average around 35 (on an i7 @ 4GHz); see the back-of-envelope sketch below.
It's an improvement, but if your goal is gaming maxed out in Crysis you may want to wait for GT300. Although to be honest, if GT300 is to the 5870 as the GTX285 is to the 4890, Crysis sans AA/AF still has unacceptable performance this generation (single-GPU wise). You may be looking at an "FTW"-edition refresh of GT300 before we see a single GPU with a minimum framerate above 30fps.
Also depends on how late GT300 is to the party. History hasn't been kind to the card that comes out very late...
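The back-of-envelope behind those numbers, with the GTX285's ~14fps Crysis minimum being my own guess from typical reviews rather than anything in the slides:
Code:
# Rough estimate only -- the 14 fps GTX285 minimum is an assumption.
gtx285_min_fps = 14.0
uplift = 0.37  # ~35-38% read off the slides

hd5870_min_fps = gtx285_min_fps * (1 + uplift)  # ~19.2 fps
needed = 30.0 / hd5870_min_fps                  # to hold a 30 fps minimum

print("Estimated 5870 minimum: %.1f fps" % hd5870_min_fps)
print("Extra speedup still needed: %.0f%%" % ((needed - 1) * 100))  # ~56%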
Has anyone seen these numbers:
http://www.madshrimps.be/vbulletin/f...hit-web-66284/
Oh, pages are moving quickly then.
No, it's just a dual rasterizer, not dual core. No SFR. The rasterizer was a limiting unit in the last generation, so they doubled it. Tessellation especially needs a stronger rasterizer. Maybe we'll see a similar approach in GT300; the GT200 rasterizer is also a limiting part.
http://img12.imageshack.us/img12/801/58it1.png
dual setup battle yay
Thank you for your input. I had a crazy feeling that it was my video card. I have been a big Nvidia fan, but I think I will give ATI a try for once and see how things are. It was crazy, because I enjoy playing games on PC, but when I see better graphics on the 360, it makes me go bananas. Well, I will wait for the official ATI release and see which 58xx fits my needs.
My graphics card usage:
I have a 22" LCD now, but I am planning to get a 42" TV that I will hook my computer up to. So, your thoughts on this? Samsung TV of course. Any recommendations?
For now I will be playing games on the 22" LCD, but a bigger screen in the future.
Josh :welcome:
If the 5870X2 really is 2x 5870 (nothing downclocked, etc.), then it's going to be a pretty beastly card.
If you're indeed moving to a 1080p screen, then definitely go for a 5870, or if you're going to wait, any of the higher-end current-gen cards (at a cheap price, that is) would be a good get to hold you over. Go for at least 1GB. A 2GB 5870 would last a while though!
Wonder how much better CF would perform on the X58 platform. :)
Grr, why isn't anyone benching GTA IV? That's one of the more demanding games, even if it doesn't look the part.
OMFG, do want!
http://i162.photobucket.com/albums/t...t/th_in1ay.jpg
http://i162.photobucket.com/albums/t...tt/th_in1h.jpg
http://i162.photobucket.com/albums/t...ytt/th_in2.jpg
http://i162.photobucket.com/albums/t...tt/th_in3b.jpg
http://i162.photobucket.com/albums/t...ytt/th_in4.jpg
http://i162.photobucket.com/albums/t...ytt/th_in5.jpg
http://i162.photobucket.com/albums/t...tt/th_in6z.jpg
http://i162.photobucket.com/albums/t...tt/th_in7m.jpg
http://i162.photobucket.com/albums/t...ytt/th_pn8.jpg
Thanks to D. for those pictures.
I'll ask again - what in the world are you talking about? Moving stuff around on a die makes it easier to manufacture? So you think yields are higher because there's space between the two groupings in the diagram? Sorry, but it seems you're a bit out of your depth on this one :)
Hint: There's absolutely no technical reason to have all the shader clusters in a GPU physically next to each other in order for them to "communicate". Which is laughable since shader clusters don't communicate with each other even if they're physically nearby.
Btw, based on your logic GT200 has quad shader mega clusters :rofl:
The 3DMark Vantage numbers were done with an i7... if you go back to the source it tells you somewhere... I'm not sure if those benches were done by AMD or a third party since the source was in another language, but I wouldn't be surprised if AMD used Intel CPUs to get the best numbers like they did before. :D
This thread is now 5th in the all-time post count in the news section :clap: Although, two of the threads that beat it were the 2900 and the "9900GTX" threads...a bad omen?
IIRC the ATI 4xxx series one was even higher than those and that did quite well I'd say ;)
Why couldn't they use one of their 24-core, 2-socket mobos? Those have a PCIe slot, right?
Will these be the best when Gulftown comes out?
I am waiting on Gulftown so I can go 6-core on my next build.
I said it before and I'll say it again: ATI really should have made that exhaust vent bigger. I can totally see a simple, easy fix where the current vent slots are. If they had just extended the vents down to the HDMI and DisplayPort connectors it would have been a perfect exhaust. I guess I'll just have to mod it myself; time to bust out the dremel from hell. :up:
Some guy said the 5870X2 will be around $500 - how's that gonna happen when the 5870 itself is about $400?
Hmm. So 5870x2 will be out by October or November?
Maybe I'll wait for it.
That's what I'm saying. It is such an easy fix too; look at all that room below the vents. All they need to do is extend the vents down and BAM, problem solved and everyone is happy. That would double the vented area without any changes to the PCB. :rocker:
Let's see how hot these puppies get overclocked to 1GHz!
Maybe an additional fan module that sticks out of the case over the card's vent and pulls air through would be a great fit, effectively doubling airflow without actually increasing fan speeds and noise.
Literally all previous ATI cards hit the streets below MSRP, and as others have already pointed out, by the time the X2 arrives the single card will slip even further.
Ah, and a $600 X2 would not make any sense at all if the single card dipped that low - the point of the X2's price is to be cheaper than two cards, to lure more people into CF... :rolleyes:
I expect a ~$250 5850, a ~$350 5870 and a $450-500 X2 by the time the holiday season starts (late November)...
GTA IV, while not graphically astounding, was extremely graphically immersive. Granted the two aren't mutually exclusive and they could have done both, but it did draw you in.
Do you guys know about the microstuttering in 4870x2 compared to 4870 CF? Is it different?
GTA4 was very impressive; maybe not the best graphics out there, but it's like nothing else - a living world in a video game, just amazing. You've gotta appreciate it; great work by the designers to build that world. Every GTA game since GTA 3 has been amazing, my favorite being Vice City - that's one of my favorite games of all time.
I haven't used crossfire since I experimented with two 3870s and then later a 3870X2, but it might just depend on the game you're playing.
In my experience anyways I found that (for example) Crysis stuttered so painfully that it was completely unplayable while it was smooth and clean on my 8800 GTS, while CoD4 seemed stutter-free.
This is all anecdotal though. :D
Beyond Steam I play WaW, STALKERs, Fallout 3, UT3 etc and will start RF:G today (via Steam.)
Ahhh, Crysis...
Quote:
In my experience anyways I found that (for example) Crysis stuttered so painfully that it was completely unplayable while it was smooth and clean on my 8800 GTS, while CoD4 seemed stutter-free.
This is all anecdotal though. :D
...that explains everything - it's not the card's fault, OK.:rolleyes:
Lol, even the HD 5870 isn't going to be the panacea for Crysis; maybe the 6870 or GTX 480 will do it.
Regarding microstuttering: none on the 4870X2. For the most part it seems ATI fixed the issues the 38xx series had, so take that for what it's worth.
Now STALKER is a legitimately poorly coded game. If a company can't even fix obvious bugs in its code, what hope do we have that they'll optimize the game?
That's odd, since when I got my 8800GTX I preferred PC gaming hands down; it always looks better compared to the 360, at least when it's not a poor console port. So you'll be gaming @1080p, i.e. 1920x1080, and like I said, the HD4890 or GTX 275 can handle that resolution easily. Your best bet would be to wait until next week when the HD58xx series is out and decide from there. I'm sure you'll have a great gaming experience with either the HD5870 or HD5850 anyway.
Speaking of monitor vs. TV, I know a lot of people wanna play games on their big-screen TV but I'm not one of them. I'm good with gaming @2560x1600 or even @1920x1200. Samsung is the way to go since they're one of the best TV manufacturers out there; shoot for their latest 7 series if you can. :D
http://www.legionhardware.com/document.php?id=807&p=4 At 1920x1200 the bottleneck is the GTX295 itself.
Real CPU bottlenecks will probably come with 5870 CF, though we all know that a single-chip card uses less CPU overhead than a dual-chip card, so a single 5870 does not necessarily strain the processor as much as a single GTX295.