4870 cheaper, GTX 260 faster.
Explain?
As far as I know, they were the best drivers until Cat 8.5.
Cat 8.5 addresses those performance issues, adds some more performance, and as far as I can see has no problems.
Or are you posting that just to make excuses? Come on, don't make up invalid arguments.
From Cat 8.3 to Cat 8.5 there are huge improvements in performance and stability. Yes, lots of stability problems and bugs are gone. Read the release notes, please.
The GTX 260 at $449 will be 25-30% more expensive than the 4870, and will need to be just as much faster to justify the price, since it brings no architectural, power or cooling advantages over the 4870. Not to mention that the 4870's memory bandwidth will be higher than the GTX 260's if its GDDR5 memory frequencies are high enough, and it will have overclockable shaders just like the GTX, so the 260 will not be dramatically more powerful.
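A rough back-of-the-envelope on that bandwidth point; the clocks below are the rumored figures floating around, not confirmed specs:
[code]
# Theoretical memory bandwidth (GB/s) = effective memory clock (MHz) * bus width (bits) / 8 / 1000
# Both clock figures below are rumored/assumed, not confirmed.
def bandwidth_gbs(effective_clock_mhz, bus_width_bits):
    return effective_clock_mhz * bus_width_bits / 8 / 1000

print(bandwidth_gbs(3600, 256))  # HD 4870: 256-bit GDDR5 at ~3600 MHz effective -> ~115 GB/s
print(bandwidth_gbs(2000, 448))  # GTX 260: 448-bit GDDR3 at ~2000 MHz effective -> ~112 GB/s
[/code]
So if the GDDR5 really ships at those speeds, the two cards end up in roughly the same bandwidth ballpark.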
Both cards should overclock well, and though Nvidia has historically provided more headroom, these will be the first batches of the 200 series from Nvidia, while ATI has been tinkering with the R600-based architecture for years now.
Hence my comment in the preceding post :)
Perkam
Doubt it, I reckon it should be pretty smooth with one card going by that chart, 2 x GTX280 will own it...
If you take into account that you may be able to get a 4870, which is supposed to be slightly faster than the 3870 X2 and much more consistent, for $300, that's two 4870s for the same price as a GTX 280. And without even counting the fact that those benchmarks will be much lower in real, unbiased circumstances, you might get great performance from ATI for $600...
Many people here don't want or need to justify the cards' price. They only care about FPS, and for them NV is the way to go. Overclock? Then the 4870 will be faster, but the GTX 260 will be faster too, so it will be the same again. And you don't really know how fast they are. You like estimating the performance of the upcoming cards, but I will tell you again: it's useless. You know NOTHING about how doubling the number of TMUs will affect the R600 architecture, and you don't know how and what is improved in the G200 design over G80/G92. Just stop the estimations, wait a few weeks and enjoy the REAL thing ;)
G200 is G80 architecture, RV770 is R600 architecture. Both are old (G80 6 months older actually), so that means nothing.
Original graph from JC:
http://resources.vr-zone.com//newspi...erformance.jpg
After some modifications...
http://i114.photobucket.com/albums/n...GTX200GPUs.png
Catalyst 8.4:
· A picture is worth a thousand words -> WiC avg FPS.
Catalyst 8.5:
· Call of Juarez DX10: Performance increases up to 12% on systems containing an ATI Radeon HD 3000
· World in Conflict DX10: Performance increases up to 25% on systems containing an ATI Radeon HD 3800.
· Higher performance gains are noticed on systems containing an ATI Radeon 3870 X2.
omfg; actual perf. results? i must be dreaming.
Wow, I'm glad you found a money tree and have unlimited money! Even to the wealthy, cost is always a concern. Just LOL at the thought of a normal consumer not caring about the cost of the card, only the performance... you're way out of sync with how the world works.
Well, not too bad.
Call of Duty and UT3 are where crossfire worked splendidly. The 4870 should be close to those numbers, maybe even more considering nVidia cherry picked the numbers (Call of Juarez without AA, so obvious)
And in DX10 titles where scaling isn't so good, the 4870 is probably gonna be better still. Plus, Catalyst 8.3 = LOL.
8.6 fixes the DX10 speed even more I think
Person 1: My team's next-generation graphics card will be better!
Person 2: No, my team's graphics card will be better, and just wait 10 years for the better drivers... we will so totally wipe the floor with your cards!
Person 1: But ours will cost less.
and the thread continues...
haha ; the tip of an iceberg re performance results.
As mentioned before, I would not be surprised if the high-end parts launch at high prices. In marketing it's standard practice to use "market skimming" when introducing top-of-the-line products.
Market penetration, on the other hand, applies when you are introducing value-line products. For those who complain about launch prices: better to wait a couple of months for prices to drop, or a little longer for a die shrink.
It is a known fact that some people are price sensitive (a $2-3 price delta is a deal breaker for some) while others are not.
From experience, if a person really wants something, the price means nothing. I have no qualms spending a lot on hardware, but buying a shirt or pants worth $10 to $20 can be a real PITA for me.
Bottom line, every product has its own market. If you can't afford it yet, just wait for it to come within your reach rather than bad-mouthing the company for selling the product at a high price at launch.
That's actually a serious underestimate.
HD 4870 X2 should be ~2x HD 3870 X2 in performance judging from rumors. It has 50% more shaders plus much higher shader clock, plus double the number of texture units (serious bottleneck in R600).
Plus those are old drivers, so some of those games will already run 10-20% better on the HD 3870 X2 with current drivers.
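To put some rough numbers on that scaling argument, here's a minimal sketch using the rumored figures from the post above (50% more shaders, a higher but unconfirmed shader clock); none of these ratios are official:
[code]
# Rough relative shader throughput: (shader count ratio) * (shader clock ratio).
shader_ratio = 1.5   # rumored: 50% more shaders than the HD 3870 X2
clock_ratio = 1.2    # assumed: "much higher shader clock", exact figure unknown
print(shader_ratio * clock_ratio)  # ~1.8x theoretical throughput, before counting the doubled TMUs
[/code]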
dont get drawn into that shyte. you dont want to pay that much, then dont. no-one's twisting your arm.
i'll take a couple of free 280's.
ta.
280's are a niche market, and unit volume sales wont be as high as other segments; who cares?
what im interested in is the 45nm die shrink in 1-2 years time that costs 150bucks :lol:
I think you are on a dreaming trip :p:
If it's 50% faster, you should consider yourself lucky. Still the same number of ROPs, etc.
Sounds like the old "R600 GTX killer" story all over again. AMD products over the last two years seem to have been hyped to the sky, only to fall far short of the hype on release day.
Instead of measuring your e-p33ns against each other... why don't you post links to back up what you (both) say??? Would a link to some facts help your case, or do you (both) prefer to argue?
Has R600 been out 2 years? I thought its debut was May 15th 07?
Besides, the X1900 series was a damn solid card. If it didn't have the performance crown over the 7900 series, it had notably better IQ by most accounts.
I sincerely hope they have a top notch card this round...no doubt!
Personally, I couldn't care less about power consumption and performance per watt. That's just marketing BS IMHO. I want competition, not debates over idle power differences (I know this gen it's going to be way different at load, though).
I sure hope the gtx280 comes out on june 17th or sooner so i can step up.
I've come across this problem many times. The way to fix it is to set the virtual memory allocation to exactly the same amount as your RAM.
Example: if you have 2 GB, set VM to 2048 MB. If you set more than you have, say 4096 MB, then your system will slowly become dependent on the hard drive to page memory addresses in and out of the memory cache. That is why games tend to crash; not always, but some of the time.
Metroid.
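For what it's worth, here's a minimal sketch of that rule of thumb: it reads installed RAM through the standard Windows GlobalMemoryStatusEx call and prints a matching fixed pagefile size. The sizing rule itself is just the advice from the post above, not an official recommendation.
[code]
import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

ram_mb = status.ullTotalPhys // (1024 * 1024)
# Rule of thumb from the post above: fixed pagefile equal to installed RAM (e.g. 2048 MB for 2 GB)
print(f"Installed RAM: {ram_mb} MB -> set pagefile initial/max to {ram_mb} MB")
[/code]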
It depends where the application is being executed from. The cache will be created and memory will be allocated to that task. A higher virtual memory setting may help in some cases and a lower one in others; to make the system more responsive in critical situations that involve heavy use of cached memory, strike a balance between the two. This problem is most visible with higher-end GPUs, since their heavy memory-allocation usage in some games can generate conflicts over shared memory.
Metroid.
I bet you can fry a STEAK on this setup.. 4 x 9800GX2 setup temps reach over 100c.. :eek: You can see the video here ------> YOUTUBE Can you say SUPER NERD...lol
http://www.techpowerup.com/img/08-05...c_internal.jpg
http://fastra.ua.ac.be/images/CUDA_100C.PNG
well, 100°C is still quite good for a wedged in card like that
i still wonder how those middle cards get an acceptable airflow...
edit: ah, prolly from the back :lol:
I keep my games and programs installed on the system drive (which is the fastest), so most likely it loads from the same drive (array) where the cache is, or from another one in the same SCSI chain.
I had some occasional stuttering playing the AoC beta (avg 50 fps), but I think it was mostly because of the server. I don't think it used more than 768 MB of textures at 1600x1200, but I could be wrong.
We're OT, but the PMs are not working, so I cannot ask to continue privately.
What AA & AF levels will Nvidias top end card Have? will they both still be 16x?
That's exactly what has me on the fence: how much effect the extra memory has. Since I play on a 37" TV now, I need something that won't get eaten alive by running out of texture memory at 1920x1080. I was set on getting a 1GB 4870, but now it looks like we have to wait for AIB partners to do that, which might be a bit down the road. The GTX 260 is looking like a nice option, but I will have to wait to see how all these new cards fare. Dual 4850s for the price of a GTX 260 isn't bad either, but again, lack of memory at higher resolutions with AA/AF may chew it up.
GeForce GTX 280 is in my hands...! You can ask, but I am under NDA and I cannot answer every question...
That's funny considering you just broke NDA with that post ;)
true just start talking!!!!!!! hehehe
Well, is it noisy? How are the memory chips on the back cooled?
@ OBR: Is the shader 50% efficiency increase real or not?
the little green midget that's living with the dust balls behind my fridge pulled the same crap on me this morning, right before I had my coffee...
so, what's really new?!
is it fast????????????????????????
CJ confirms lower idle power consumption on GTX280
http://forum.beyond3d.com/showpost.p...postcount=1967
http://img177.imageshack.us/img177/2...80powersj4.jpg
How much FPS can you get in Crysis on Very High, or in any new game :D:D:p:?
very nice, lower idle consumption than the 9800GTX. :D
We see loads of small bits of info leaking here and there, many of them via hardware sites, OC'ing contests and so on... but all the data will only be put out on the net after the NDA lifts, or sites will be pushed to withdraw the info like T-break had to do with the 9800GTX...
OBR runs a Czech hardware site; how do you think he could publish a full review of the card on, e.g., the 18th if he didn't have it right now... you don't run 20 games and 5 benchmarks on a card in one day...
A load power consumption of 147W? That sounds like a load of crap to me, given its massive die and power connector requirements. That's less load power consumption than the 55nm 3870; it can't be anywhere close to the truth.
An entire 9800GX2 system pulls less than 400w. How much do you think the GTX280 by itself should pull?
http://www.techreport.com/articles.x/14654/9
People think graphics cards use insane amounts of power because they see a power consumption chart that says 250watts....
Thats for the ENTIRE SYSTEM....
for example :
http://www.bjorn3d.com/Material/revi...0GTS/power.gif
The entire system is using 284 watts with a 9800GTX. So to say the GTX 280 will use 250 watts by itself is a bit ridiculous imo.
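A quick back-of-the-envelope on why wall-socket charts overstate what the card alone draws; everything below except the 284 W reading is an illustrative assumption, not a measured GTX 280 figure:
[code]
# Wall readings include the CPU, board, RAM, drives and PSU losses, not just the GPU.
wall_power_load = 284   # W at the wall for the whole 9800 GTX system in the chart above
psu_efficiency = 0.80   # assumed ~80% efficient PSU (hypothetical)
rest_of_system = 120    # assumed DC draw of CPU, motherboard, RAM and drives (hypothetical)

dc_power = wall_power_load * psu_efficiency   # power actually delivered inside the case
card_only = dc_power - rest_of_system
print(f"Estimated card-only draw: ~{card_only:.0f} W")  # ~107 W, nowhere near 250 W
[/code]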
I'm guessing it's average rather than peak
yeah, ive got a gtx280, but i wont tell anything; i'll just gloat. :rolleyes: :sick:
plus i got it for free. :yawn2:
gee im so excited now :yawn2:
hummm ~150w a card..... can i run Quad Core with two cards in SLI on 750w power supply?
Faster than 2 x 8800 Ultra OC in SLI?
Yes or no will do..
Not sure if this is old hat but it seems to confirm the stream processors and memory bus for G200 and HD4000 cards.
Hexus
just curious, anyone got any info on how PCI-E 1.1's bandwidth might affect GTX 280's 512bit interface? just for single card (oh i have EVGA 680i btw, not sure if its 1.1 or 1.0)
i hope not much cuz i kinda wanna wait until nehalem to upgrade my board again ...
yeah, didn't know it supports DX 10.1.. didn't Fud say it won't? lol, Fud got it wrong again
How come there are no benches? I thought the date was June 3rd for people to post results
it takes time,
it's only like midnight in America,
and cards are only shipped to retailers/reviewers today
then they gotta unpack etc set up shop before some guy can steal a card to leak
the NDA stands until release, so the only chance of benchies is probably from the Chinese sites, as usual
benches may not be up for days, at least Computex should display cards
Hey all - didn't see this posted already.
some pics of the card have surfaced!!!
http://www.vr-zone.com/articles/Deta...ures/5826.html
http://www.vr-zone.com/articles/GeFo...shot/5828.html
:shocked::up:
Dang it Dangals you beat me to it!:clap:
Oh now i see, ur in OZ too:). The americans are still asleep;)
Not really :p:
lol.... neither am i :D
Looks like, going by GPU-Z, the GTX280 clocks are indeed what the Inq said
I think 2 x 8800GTX (two times, not 8800GTX in SLI) is to be expected from the GTX 280; just as 2 x 3870 is to be expected from 4870
I still can't understand why the chip is so often dubbed GT200, when the die itself has G200 printed on it, with no "T". Even GPU-Z displays GT200.
:confused:
over @ vr-zone
Pics
Clocks and more pictures
looks like one power hungry monster:eek:
Dunno if this should have been in the Computex section; sorry for opening another thread if so
cheers!
As always, nvidia's shrouds are awesome.
Some guy from another forum explained that GPU-Z doesn't prove anything and just retrieves basic data stored in its database for a particular GPU. Which is true.
So the writers of GPU-Z created the database entry for the GTX 280 with those numbers (they went by the rumors or received information from NVIDIA), and it just displays them. As can be seen, it can't detect the card's clocks.
Sure it can. Overclock your card by 1 MHz and run GPU-Z.