have you looked at the dirt 2 benches? i don't see the awesome dx11 power of the fermi series :ROTF:
As I said: I wasn't comparing ATI's cards to NVIDIA's with my statement. Rather, it was a general observation regarding the HD 5000 series' performance in a DX11 environment. That's all. ;)
HOWEVER, the GF100 series was supposedly built with DX11 in mind, which could translate into great performance in applications that support it. But that is just a rumor at this point, and only time will tell whether it pans out.
I like their Unigine Heaven video. It shows the card does well in difficult situations. Quote:
A couple of online retailers have already begun taking pre-orders for NVIDIA's upcoming GeForce GTX 480 and 470 GPUs despite the fact that cards are a month away from launch. Listings for the GTX 470 start around $500, while the GTX 480 is over $650! Surely these can't be the MSRPs for these GPUs, or are they?
NVIDIA says no. The company has issued the following statement in response to the online listings:
NVIDIA and our launch partners have not released pricing or pre-order information yet. Any Web sites claiming to be taking pre-orders should not be considered legitimate.
And with 50% more bandwidth and 50% more DRAM, the fps at 1920x1200 and 2560x1600 had better be good.
Reviews and benchmarks are nice to fondle over, but ultimately nobody in real life is gonna play L4D("1") or 3DMark on their 480. They're old.
The Heaven benchmark is the most advanced and forward-looking one, and thus a far better indicator than old DX9 crapola.
To those advocating the opposite with old-school RE5 and UT3 benchmarks... fools. Does anybody really care about 240fps vs 200fps? What's the point of replaying them on a GTX 480?
If that nice big performance gap shows up across the board in DX11 games, especially in tough scenes that raise the minimums, nVidia's high price will be "justifiable".
On the other hand, expect PR to get mugged if all they have is HL2: 240fps vs the competition's 200fps.
If nvidia really makes a loss on every GTX 470 & 480 sold, does that mean nvidia fans will refrain from buying these Fermi products to support nvidia, while ATI fans purchase them out of spite? :D
It's 20% better than a 5870, it has nVidia drivers, it features PhysX (better to have it than not) and it has better game support: it sucks? Wake up please.
Oh, did I mention individual game profiles, so no constant messing with settings? Hardly a loser; it's late and still a winner. The lower-end card (470) makes the more expensive 5870 pale, heh...
Oh, one more thing: I think both the GTX 480 and GTX 470 use an analog VRM setup, which is a very curious thing to do. I say curious because when the 5870 launched with a digital VRM, I asked what advantages it had over an analog system, and I was told that a digital VRM is good at handling lower voltages, while an analog VRM handles higher voltages with better stability.
When you look at Fermi it doesn't shout 'gaming', it shouts 'GPGPU'. I'm still not convinced that nvidia's tessellation isn't shader bound. When you have a GPGPU that powerful you can get away with a lot of stuff on synthetic benchmarks, as they tend to focus on one thing at a time.
I was excited too when I first saw the numbers, but I need to see where the power comes from, because when you turn off tessellation in that section of the benchmark there is next to nothing left.
That's not a reason; you can use other programs.
And Nvidia's panel isn't so awesome.
A rumor that smells like a bunch of PR stuff, just like AMD saying that their native quad core was going to be better than Intel's "sandwiched" one.
Hehe, the problem is that availability is going to be limited, so Nvidia knows beforehand what kind of losses to expect.
Curiously built post. Quote:
It's 20% better than a 5870, it has nVidia drivers, it features PhysX (better to have it than not) and it has better game support: it sucks? Wake up please.
Oh, did I mention individual game profiles, so no constant messing with settings? Hardly a loser; it's late and still a winner. The lower-end card (470) makes the more expensive 5870 pale, heh...
The 20% better figure is for the 480, and that's a best-case REAL GAME scenario. And for sure it won't be cheaper than the 5870.
The 470, on the other hand, is mostly inferior to the 5870, and as for price, I haven't seen anywhere that it's gonna be cheaper than the 5870. The cheapest 5870 on newegg is $379 now, and I REALLY don't think that's gonna happen (I would love that tho).
So how the 5870 can pale in comparison to the weaker 470 is beyond me. PhysX? A few gimmicks with huge requirements aren't for me.
Let's hope for something cheaper than the 5870 tho; it would drive 5xxx prices down to their intended MSRPs and I would have a chance of buying a 5850 ;-)
LOL. Nice question, but who would want to burn $500+ in cash? :D If (apparently :confused:) Fermi is doomed to fail (sales-wise), then it'll be.
Hmmm, at least for Sony, the PS3 has a longer amount of time to make up for the losses, not to mention royalties. :yepp:
Just saying it, no hate for Fermi though, as a consumer. :up:
With all those positives in mind, we'll soon be seeing price-cut meteorites :shocked:. Time will tell in the next few months. :rolleyes:
pfff i dont buy that... its just as much based on Gt200 as rv870 is based on rv770, and its not a new architecture... its pretty obvious that if there was a clear goal with gf100, it was gpgpu... they went for massive shader core power, which, they CAN use for tesselation... but i really dont think that this was their original idea behind all that shader power...
Well, I guess we can agree to disagree then. Granted, the Shaders, TMUs and ROPs are still called the same thing but look at the difference:
GF100:
http://images.hardwarecanucks.com/im...00/GF100-5.jpg
GT200:
http://images.hardwarecanucks.com/im.../GTX280-83.jpg
Some of the main differences of the GF100 (there are MANY more)
- Large dedicated (and unified) cache structure
- Fixed function stages are now integrated into each cluster (or SM as they were called on the GT200) through the Polymorph engine
- Texture units now have direct access to the cache structure
- Scalability through unified ROP + L2 + memory controller structure
- Dedicated L1 Load / Store cache
- Incorporates a raster engine into each GPC which groups the three pipeline stages (Edge Setup, Z-Cull and the Rasterizer)
- GDDR5 memory controller
- DX11-necessary components such as a dedicated tessellator, etc.
So for the last 3 months Nvidia talked about Unigine, and then Unigine, and more Unigine, and finally Unigine. And then they take the best 5 seconds from the whole benchmark run, make a graph and proudly show it everywhere.
What does all this mean? You decide.
look at what? those are two different ways to illustrate a gpu, of course it looks very different :D
evolution
tweak
evolution
tweak
evolution
tweak
thats a standard component, isnt it? its nothing special...
weeeeelll we will see about that :D
i have to reword my statement though, there is def a lot more new stuff and improved stuff in gt300 vs gt200 than there is in rv870 vs rv770... rv870 seems to be 99% a double-pumped rv770, while in gt300 there are at least a lot of tweaks and improvements over gt200, even though i wouldnt call it a new gpu. i think it doesnt deserve to be called GF100, it should be GT300 imo... just like rv870 is just that, rv870, and not cypress, some new architecture or design etc... but hey, who cares :D
i think its funny that marketing has penetrated some chip companies so deeply that they even give their asics marketing friendly codenames :D
saaya, I'm sorry but if you think the geometry and cache changes on Fermi are simple "tweaks" like we're used to then you should stay away from arch comparisons.....
Of course everything between sky and earth is a kind of evolution or tweak of what came before :p:, but the "Dedicated L1 Load / Store cache" is closer to a revolutionary step this round. It gets Fermi much closer to being a supercomputer, because it is the part that supports generic C/C++ programs (much like an x86 CPU would).
This L1, backed by a unified L2 that sits between all the shader clusters and can be accessed by all of them, makes it possible to have a unified read/write cache, which allows program correctness and is a key feature for supporting generic C/C++ programs. That is actually a pretty revolutionary step, I would say.
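To make that a bit more concrete, here is a minimal CUDA sketch of the kind of data-dependent read-modify-write (scatter) pattern such a generic read/write cache helps with - the sort of access the old read-only texture caches couldn't do much for. Nothing in it is Fermi-specific, and all the names and sizes are made up for illustration:
Code:
// Illustrative only: a trivial histogram kernel doing data-dependent
// read-modify-write to global memory (a scatter pattern).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void histogram(const int *data, int n, int *bins, int num_bins)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        int b = data[idx] % num_bins;   // data-dependent address
        atomicAdd(&bins[b], 1);         // read-modify-write to global memory
    }
}

int main()
{
    const int n = 1 << 20, num_bins = 64;
    int *d_data, *d_bins;
    cudaMalloc((void **)&d_data, n * sizeof(int));
    cudaMalloc((void **)&d_bins, num_bins * sizeof(int));
    cudaMemset(d_data, 0, n * sizeof(int));        // dummy input, all zeros
    cudaMemset(d_bins, 0, num_bins * sizeof(int));
    histogram<<<(n + 255) / 256, 256>>>(d_data, n, d_bins, num_bins);
    cudaDeviceSynchronize();
    printf("done\n");
    cudaFree(d_data);
    cudaFree(d_bins);
    return 0;
}
On hardware where those scattered atomics and stores can be serviced in a read/write L1/L2 instead of going all the way out to DRAM every time, this kind of generic C-style code becomes much more practical - which is exactly the point being made above.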
No. Both a digital VRM and a traditional analog step-down buck converter can output any voltage from 0-12V. For both digital and analog designs, component selection (i.e. caps, inductors, diode/MOSFET rectifiers) determines max current, transient response (i.e. ripple current), etc.
Analog caps are tall and get in the way of heatsinks; AMD's digital VRM is better for heatsink design. Multi-phase solutions are also easier to link up, and a digital controller can better manage switching, i.e. reduce voltage/clocks/current if the card is overheating.
It's kind of like old-school jumper/BIOS overclocking vs using setFSB in Windows - the latter is obviously easier and more convenient.
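For anyone curious about the actual math either way, here is an illustrative sketch of the ideal (lossless, continuous-conduction) buck-converter relations that sit underneath any of these VRMs, analog or digital. Every number in it - 12V input, a 1.0V core rail, 200A load, 6 phases, 300kHz, 0.5µH - is a made-up example, not a spec of either card:
Code:
// Illustrative only: ideal buck-converter math for a hypothetical GPU core rail.
// None of these numbers come from the 5870 or the GTX 470/480.
#include <cstdio>

int main()
{
    const double v_in   = 12.0;    // PCIe 12V rail
    const double v_out  = 1.0;     // hypothetical GPU core voltage
    const double i_load = 200.0;   // hypothetical full-load core current (A)
    const int    phases = 6;       // hypothetical phase count
    const double f_sw   = 300e3;   // switching frequency per phase (Hz)
    const double L      = 0.5e-6;  // per-phase inductance (H)

    double duty      = v_out / v_in;                       // ideal duty cycle
    double i_phase   = i_load / phases;                    // average current per phase
    double di_ripple = (v_in - v_out) * duty / (f_sw * L); // peak-to-peak inductor ripple

    printf("duty cycle        : %.1f %%\n", duty * 100.0);
    printf("current per phase : %.1f A\n", i_phase);
    printf("inductor ripple   : %.1f A p-p per phase\n", di_ripple);
    return 0;
}
Whether the controller generating the PWM is analog or digital doesn't change these relations; it mainly changes how easily you can monitor, tune and throttle the phases at runtime, which is the heatsink/management point above.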
I've been wrong before, but I don't think so. TSMC 40nm is still far from "mainstream" and quite expensive. The only reason for AMD to drop to $300 would be if Fermi launched at $300 - infinitesimally improbable.
Does anyone think a dual-GPU Fermi is possible at all (GTX 470 chips at downclocked speeds, like the 5970) without requiring a nuclear power plant and generating heat equivalent to the surface of the sun?
No. Fermi is the first mass-market GPU architecture to have:
#1: Parallel geometry setup
#2: Generalized, coherent read/write caching
Both are huge deals because of the engineering effort required, and they make a lot of things easier to do. Of course, it doesn't mean squat if you just care about the fps that comes out the other end.
My recommendation to ATI would be to adopt/adapt the CUDA api from nvidia and get on with it. Let's be honest, nvidia has the clout to raise a stink big enough for everyone to notice. Their connections run deep.
Unless ATI is planning an API release secretly. I really hope they aren't, waste of resources.
@triniboy: I'll check it out. This gives me something to research :)
I believe ATi could adapt CUDA easily if they wanted to.
But I agree with you, we need a common API for accessing a GPU (just like AMD and Intel CPUs can run a common set of instructions and programs).
Somebody has to provide a common platform to take this to the next step, where everybody can have a personal supercomputer :D, but these guys (nVidia and ATi) are too busy fighting each other, and Intel would actually love to see both of them dead and defeated in this area, because a great GPGPU platform can threaten Intel's dominance in the very expensive supercomputer market.
There are two other reasons to drop the price on the 5870 to $300
- To deny the GTX 480 any significant market share if it comes in close to the 5870's current price/performance. If the GTX 480 comes in at 5-10% higher performance than the 5870, would the average upgrader still choose it if the 5870 were 25% cheaper (probably not)?
- If AMD/ATI counters with a 5875 (or whatever a 5870 rev2 would be called) at the same price point as the current card (to retain the "performance crown"), the "old" 5870 wouldn't sell at all at its current price.
I'm inclined to say it's very likely AMD/ATI would cut the price; I'd probably bank on it being closer to $50 though (I would be quite happy if it were $100).
ATI definitely has price flexibility right now. Don't forget that ATI prices have risen above launch MSRP. Their cards are cheaper to produce, they've had them for months, and they've had better yields from the start (6 months ago).
Thing is, if they don't feel threatened by Fermi, they probably won't cut :/. MSRP for the 5850 at launch was $259; it stands at $300+ now.
Yep, exactly. These are very big deals in GPU evolution (if not revolution) right now. All GPUs have a HUGE number of GFLOPS (compared to a CPU), but how do you control/program the beast to do something more useful than just gaming?
That dedicated L1 cache plays a big role in making it much easier to program/control the beast. It makes it possible to have a unified read/write cache, which allows program correctness and is a key feature for supporting generic C/C++ programs.
Fermi is a complete arch redesign, like g80 & r350. Focus on cache, compute, & gpgpu programmability. cypress is rv770 with alu's & rops doubled and scheduler & setup redesigned to effectively use new resources.
It's not a G71->G80 jump, but it's not a G92->GT200 one either
Not sure if this has been posted before. Card is supposedly the GTX470
http://img.photobucket.com/albums/v4...r/d276985e.jpg
Yeah, it's been posted
It's been posted, but I just checked anandtech's reviews and the 5870 gets 38.5 fps on average I presume, in warhead @19x12 4xAA Enthusiast... granted anandtech's setup has a faster i7 (+130MHz?) but I don't think that's enough for a ~25% increase in frames. I was trying to look for a review with 8x AA but found none atm.
edit:
of course it depends on the part of the game that was ran, but I was thinking they used the built-in benchmarking tool
Regarding the tessellation performance of Fermi, I'd wait to see the comparison in real-world games. IF (and I don't know whether it's true, false, or somewhere in between) GF100 relies more on its CUDA cores to do tessellation while RV870 relies more on fixed-function dedicated hardware, I'd expect the former to have a huge advantage in a nearly synthetic benchmark where most of the load is tessellation (because Fermi could throw more resources at it), but the situation would rebalance considerably once complex shaders have to be computed in addition to tessellation (as would most likely be the case in most real-world games). The Heaven benchmark seems to be pretty heavy on tessellation but much lighter on every other kind of shader. Maybe that's why NV is focusing so much on the Unigine Heaven benchmark. Maybe not. We will see... when we have proper reviews and real-world use cases.
Same with general performance... I find it hard to believe numbers like the ones Charlie is claiming.
They can't do that. If they did, CUDA would quickly become the GPGPU standard in the mainstream market, most developers would use it, and that would give NV a huge competitive advantage over AMD/ATI - even if it only meant that NV knew much sooner than its competition how CUDA would evolve and was always a step ahead (with no chance to catch up) in mainstream computing. And they could always extract even more advantage by exploiting their rights to their proprietary API once it had settled in as the mainstream standard, similarly to Creative Labs with EAX.
My impression is that AMD is focusing on gaining some market share in the mainstream market while NVIDIA tries to open up and take the emerging HPC market, and I think that could be a mistake in the long term... unless they directly don't want to enter that market, which could be another, different mistake. In the mainstream... well, I think that's more the terrain of vendor-independent solutions like DirectCompute and OpenCL.
Yeah, I have always thought that was going to be the worst problem for Fermi, since we knew the cards would be SO late. Competing against a hardware product (even more so for graphics cards) whose life cycle is 6 months older is painful. The price of such a product naturally declines over its life, and a 6-month delay on a product like this is too much IMO. Indeed, I think the current HD 5800 series prices at this point are not natural and are only due to there being no competition until now. I'd expect price drops as soon as the GTX 400 cards are released (or shortly after), unless Fermi is worse than I think it will be.
If nVidia has not been able to reach the full potential (512 shaders) of the GF100,
how long do you feel it will take them to release a card that has everything working at full tilt and can meet the consumer demand for it?
A month, two months, this year?
I really hate the direction graphics cards have been taking lately...
1. We're gonna need a thermonuclear plant to feed them.
2. Grab the sauces and bring some eggs to fry on your card!
3. Get a wardrobe as your PC case or that huge graphics card won't fit inside!
4. Do the IHVs know that we're in the middle of an economic crisis? I cannot pay $800/600€ for a graphics card, omg!
Now look at your mobile phone... technology should consume less power, offer more features for less money, occupy less space and get easier to use over time...
See this old 1990 image of how absurd card dimensions have gotten:
http://img169.imageshack.us/img169/3728/fastm.jpg
and compare it with this one:
http://img251.imageshack.us/img251/158/5970.jpg
( I expect the Fermi GX2 version to be almost as large as this.... red brick )
Have we evolved REALLY?
:shrug:
For me, any high-end desktop card costing more than $200/120€, consuming more than 70W or measuring more than 20cm (7.9 inches) is absurd. Simply absurd.
So I don't care whether the GTX 480 has 512 shaders, etc... What I want is a high-end card with a reasonable configuration, not those power-hungry, expensive and hot monsters! Of course you can get more FPS and performance by making the cards larger, using more energy, etc... The merit is in doing it without growing the size and the power consumption... so nope, I don't like the GTX 470/480, but I don't like the 5870s/59xxs either.
^ have we evolved????
http://jameskennedy.com/wp-content/u...10/01/Doom.jpg
http://www.hardwired.hu/img/wg/2/743/Crysis_44.jpg
no...not really...
/sarcasm
i think you belong in the midrange card segment :)
im not really concerned with power or heat ..i want performance/quality
http://www.semiaccurate.com/2010/03/...ugh-thrashing/
??? Really neat PCB.
No.
IMHO the 5870 and 5970 were launched with Fermi competition already factored in.
Regardless of what nVidia does, prices won't drop below HD 48xx series levels - these cards are bigger and more expensive to make.
Exactly. AMD can let nVidia "steal" all the market share they want - not sure they can make a million cards though... :rofl: :ROTF:
Couldn't agree more. Never in history has a launch card using a new API been able to master it in its first generation. I doubt we will see strong DX11 cards until at least the 3rd generation, perhaps a 2nd-gen refresh best case. That said, what they did with Cypress was a smart business move nonetheless. I'm sure Nvidia will push DX11 now that they can call it relevant, but if the past repeats itself, I am not buying it (their marketing). Hell, I had an 8800 GTX from launch week for a good 14 months and never felt DX10 was relevant until much later...
Yep, I had a G80 earlier, and while it crushed DX9 games, it could not run DX10 as well as the 2nd generation of cards (G92s were better, then RV770 & GT200 finally saw DX10 be relevant).
The best bet is always to buy the card that you can use now, and not what you will worry about a year down the line because by then, it will already be outdated/slow
http://bbs.expreview.com/thread-27750-1-1.html
Far Cry 2, 2560x1600, Ultra, 8xAA, Max settings (min/avg/max fps):
5870: 17/32/55
GTX 4x0: 30/40/67
http://i45.tinypic.com/nwxawl.jpg
http://i45.tinypic.com/j9c4d4.jpg
Where does it say GTX 470? It only says GTX 4x0.
And the thread talks about a memory bug, so I wouldn't be surprised if the extra memory of the 4x0 is helping it out at 2560x1600 vs. the 1GB on the 5870 (it will be interesting to see what 2GB does, though).
Anyhow, Far Cry 2 was one of the games Nvidia PR highlighted earlier, so these are nice numbers, but we already knew FC2 was going to run well on Fermi and Nvidia hardware in general, so this doesn't tell us a whole lot right now. Can't wait to see some other games benched.
yea finally a card with good performance at 2560x1600.
Yah, zerazax thx for pointing that out. I changed it.
These watermarks are the same as on the Heaven benches, and the Heaven numbers are from that first bench chart with the Chinese writing. I guess phk has a 470 & 480, or maybe just a 470. It's true the 5870 takes a big hit in FC2 at 2560x1600 8xAA - probably ATI's toughest challenge with AA.
http://img.photobucket.com/albums/v4...r/d276985e.jpg
http://i46.tinypic.com/2a8ncsz.jpg
2560 x 1600 with 8xAA. Nice.
Exactly my thoughts. I'm not buying NVIDIA's magical tessellation performance story yet.
I doubt that NVIDIA would simply invest that much in dedicated tessellation hardware. It's more likely that they use the "CUDA cores" to do tessellation in exchange for shader performance.
If that's the case, their DX11 driver will have some delicate load balancing to do. It must decide how many cores should be reserved for tessellation in each game. Not necessarily an elegant solution.
Until we see performance in real games, I'm not holding my breath.
I think the key for them is that they increased the number of triangles set up per clock... the games that Nvidia PR has benched heavily - Far Cry 2, HAWX, and the Heaven benchmark - are all known to be pretty triangle-intensive.
yeah... and looking back its hard to think of any gpu architecture in the past as revolutionary, the only ones i can think of are r300, r600, nv30 and G80... those are what id call really different architectures, everything that followed them were more like intel style tocks, a shrink with tweaks...
right, ill give you that, that is pretty revolutionary... but the fact that this isnt really all that useful for games says a lot, doesnt it? as a gpu, fermi isnt really revolutionary imo and is more of a GT300... if youd use it for gpgpu then calling it GF100 makes sense... but thats just semantics :D
definitely, even with two full-blown 480 chips its definitely possible... but it would run at very low clocks and the final result wont be that much faster.
mhhh i wouldnt say complete... its more than the last step from g92 to gt200... id say the rv770 to rv870 step is about the same as the g92 to gt200 step.
yeah but how representative is farcry2 performance?
everybody knows by now that gf100 rocks in fc2, but that game is not exactly brand new, everybody played it already, its not a game youd want to play two or three times as its really repetitive the first time already, and it runs perfectly fine on even a gts250 at pretty high res and max details.
2560x1600 8aa perf is nice, but who has that big of a display?
why would anybody spend so much on a big display for games? at that res you depend on sli or xfire to get enough perf to play the top graphics games on the displays full res... so cool, you will be able to play fc2 at that res, more or less, but what about other games such as metro 2033, hawx, crysis warhead, crysis2...?
i dont see the point in showing off 2560x1600 8aa performance... its better than the competition but still not good enough to actually play with that setting...
its like showing off 8fps vs 1fps at some insane resolution and claiming you are 8x faster... yeah but whats the point?
you mean would people choose a 300$ card or a 400$ card if the 400$ card is 5-10% faster? how can you say they probably would??? its pretty obvious that they would go for the 300$ card, isnt it?
while lots of people bashed charlie for claiming that there would only be 5-10,000 cards at launch, thats what everybody is reporting now, and even nvidias ceo admitted that they wont REALLY launch until the second half of this year... so why should ati drop their prices and cut their margins if all nvidia can do is offer 5-10,000 cards over a few months, which is what ati probably sells per DAY!
The 2GB 5870 is 20% faster than the 1GB 5870 in Far Cry 2 at 3x1920x1200 with 8xAA.
Obviously 1GB of VRAM is a bottleneck at extreme settings.
http://www.techreport.com/r.x/radeon...1gb-vs-2gb.gif
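Some rough arithmetic shows why. The sketch below is a naive estimate only - it assumes 32-bit color plus 32-bit depth/stencil per sample, no compression, and it ignores textures, geometry and driver overhead, which eat plenty more in practice:
Code:
// Naive render-target size estimate: 32-bit color + 32-bit depth/stencil per
// sample, no compression, ignoring textures, geometry and driver overhead.
// Purely illustrative; real memory use is higher and driver-dependent.
#include <cstdio>

static double target_mb(long long w, long long h, int samples)
{
    const int bytes_per_sample = 4 + 4;   // RGBA8 color + D24S8 depth/stencil
    return (double)(w * h * samples * bytes_per_sample) / (1024.0 * 1024.0);
}

int main()
{
    printf("1920x1200,   4xAA : %.0f MB\n", target_mb(1920, 1200, 4));
    printf("2560x1600,   8xAA : %.0f MB\n", target_mb(2560, 1600, 8));
    printf("3x1920x1200, 8xAA : %.0f MB\n", target_mb(3 * 1920, 1200, 8));
    return 0;
}
Call it 400+ MB of render target alone for the triple-wide 8xAA case and roughly 250 MB at 2560x1600 8xAA, before a single texture is resident - on a 1GB card that is where the swapping and the fps cliff come from, and why the 2GB version pulls ahead.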
about dual fermi, look at dual gt200...
http://www.xbitlabs.com/articles/vid...0_5.html#sect0
285 = 160W
295 = 200W
480 = 280W (rumored)
495 = 300W (tdp limit)
going above 300W is painful... you need very expensive cooling above that point, and you break compatibility with pci-e specs, so i think thats unlikely.
so for gt200, nvidia had to cut a single gt200 down from 160W to about 100W per GPU, roughly a 37% reduction, and they achieved that by lowering the clocks from 648/1476/1242 to 576/1242/999 (cuts of roughly 11%/16%/20%) and by trimming the memory hub from 8 to 7 channels, which takes another ~12% off the bandwidth. so id say on average they cut its specs down by around 25%, and that bought them a ~37% power reduction. thats a very good trade off!
for gf100 the cut has to be bigger. to get two 280W chips under the 300W board limit, each gpu has to come down to roughly 150W, a cut of about 45-50%, i.e. roughly a quarter to a third more than gt200 needed. we dont know how gf100 scales - how well it clocks at lower voltages and how fast power drops when voltage drops - but if we assume its similar to gt200, the specs would need to go down not ~25% but more like ~30-33%. thats an optimistic guess, and in that case a 490 would perform at best about 1.3-1.5x as fast as a 480. since sli doesnt always scale linearly, the average perf boost would probably be around 40% or less.
a dual gf100 card would easily cost 1000$ if not more, would be even hotter and power hungry than a 480, and would only be 40% faster... i think this doesnt really make sense... from a business point of view nvidia will make more money by selling single gpu cards, as even if they offer us a good deal for the dual gpu card, it will be very expensive and too expensive for 99% of us for sure. the only reason to do this would be pr...
but if you followed the numbers, you will notice that, assuming the numbers we heard are true, and a 480 is only marginally faster than a 5870, a 490 would barely beat the 5970... it would, but not by much... and it would cost a lot more... and amd is prepping 850mhz+ clocked 5970 which are 10-20% faster than current 5970s, so a dual gf100 card doesnt make much sense at all right now... they have a limited supply for gf100 chips to begin with, they are rumored to have slim margins even at high prices, and they are rumored to have tdp problems... im sure there will be a dual gf100 card, but i dont think we will see this anytime soon... Q3 the earliest if you ask me...
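Spelling out the back-of-the-envelope arithmetic from the post above: the 160W and ~200W figures are the measured numbers quoted for the 285/295, the 280W is only the rumored 480 board power, 300W is the PCIe ceiling, and the ~25% spec cut is the rough GT200 precedent. Everything else is just ratios, so treat the output as illustrative, not a prediction:
Code:
// Rough scaling arithmetic for a hypothetical dual-GF100 card, using the
// numbers quoted in this thread. Purely illustrative; nothing official here.
#include <cstdio>

int main()
{
    // GT200 precedent: single GTX 285 vs the per-GPU budget on the GTX 295
    const double w_285       = 160.0;            // measured board power, single GPU
    const double w_295_total = 200.0;            // measured board power, dual GPU
    double w_295_per_gpu     = w_295_total / 2.0;
    double cut_gt200         = 1.0 - w_295_per_gpu / w_285;   // ~37%

    // GF100: what a dual card under the 300W PCIe limit would need
    const double w_480   = 280.0;                // rumored single-GPU board power
    const double w_limit = 300.0;                // PCIe ceiling for one board
    double w_490_per_gpu = w_limit / 2.0;
    double cut_gf100     = 1.0 - w_490_per_gpu / w_480;       // ~46%

    // If spec cuts scale the way they did for GT200 (~25% specs for ~37% power)
    const double spec_cut_gt200 = 0.25;
    double spec_cut_gf100 = spec_cut_gt200 * (cut_gf100 / cut_gt200);

    printf("per-GPU power cut, GT200 dual card : %.0f %%\n", cut_gt200 * 100.0);
    printf("per-GPU power cut, GF100 dual card : %.0f %%\n", cut_gf100 * 100.0);
    printf("implied spec cut for GF100         : ~%.0f %%\n", spec_cut_gf100 * 100.0);
    return 0;
}
Which is where the "roughly a quarter to a third more than GT200" ballpark comes from; whether GF100's power actually drops that gracefully with clocks and voltage is the big unknown.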
Yes I agree with your last statement. They want to show off benches like this to paint the 5870 in the worst light possible. I don't consider it representative of anything important, but it is one of the only few benches out there, which is why I posted it here. Not many folks have a 30" and want 8xAA along with it. I don't consider it showing off, just relative data for this thread. Maybe we could ask him to rerun it at 1920 4xAA - if so, we would probably get results similar to his dirt2 and crysis numbers.
Totally agree with the 8fps vs. 1fps scenario. Nvidia claiming, "Impeccable 800% performance over HD5870!"
If anything Charlie says is right, it's that the NV architecture is being held back by clock speed at the silicon level. If that's true, hopefully NV can introduce vastly better clocks with its next spin and a shuffling of things.
I remember the best respin and architecture shift for them was the 7800-to-7900 GTX generation. They jumped from 430MHz up to 650MHz, and there were even fewer transistors in the 7900 GTX. Although some of the clock speed came from using a smaller process, I think some of the gain came from NV rearranging things. That needs to happen ASAP.
A 1700MHz shader clock with good drivers should be competitive even with the 5970, even at the currently leaked performance levels.
If what you said is true, then nVIDIA lied in a public technical specifications pdf.
And you're wrong about load balancing via the driver.
That would be the silliest move ever.
"CUDA cores" can be dynamically assigned to various tasks and load balancing should and is done by hardware not software.
Not an elegant thinking I'd say :p:
One thing I worry about is game development. For example, if you use MW2 as an example, only 13% of the copies sold, if I remember right, were for PC. And even with "only" that percentage it was still a huge-selling PC title. I disagree with folks that say PC gaming is dying, and I will religiously defend my 4.0GHz i7 rig with all it's GTX and SSD and RAID 0 goodness (soon to be watercooled as well!). And I also agree with those that make mention of the digital distributors, such as Steam and D2D.
HOWEVER ... Even with all those above considerations, consoles are still king when it comes to overall sales. And from a profit maximization standpoint it makes sense. If you're a dev, don't you want to develop in an arena where you'll sell the maximum amount of copies possible? And as an aside, isn't it nice to develop for the SAME PLATFORM, year after year after year???
I know DX11 brings some cool stuff to the table, but with a stack of games on my "to play" list that don't even have DX10, I am starting to have a hard time agreeing with such aggressive pushes to new standards. Look at the spot devs are in right now. If they want the high-end enthusiasts' attention, they're going to have to go after DX11. But how many people out there have DX11 hardware? The number is growing, but there are still millions of PC users, even gamers with decent hardware, whose systems lack support for DX10. So now devs have to make new games support DX11, the latest and greatest, support DX10 as well, which is supposed to be the current mainstream, AND write in compatibility for the older DX9 if they also want to be able to sell to that market segment, which is still quite large????
ARE YOU KIDDING?
I hope this isn't having a counterproductive effect on PC game development, and I know for some companies it is not. They will go after that market no matter what. But I still worry that it's putting them in a spot where they have to do more work for an even smaller part of the market. And if they want ALL of the PC market, they're going to have to do even more work.
I'm all about leaving an older engine for a newer, leaner, more efficient one, but I wonder if we're pushing a bit too fast. Maybe it was ok back when consoles weren't such an overwhelming majority option, but today things are different.
Just some thoughts. I might be crazy though. Way past my bedtime.
well, when is something accelerated in hardware and when is it just being emulated? its hard to tell, and as soon as hardware detects a certain type of code, or can switch into a different mode of operation that is faster for certain code, you could already call that hardware acceleration and dedicated hardware for that code... even if the same logic is actually able to process very different code... the proper definition of dedicated hardware or hardware acceleration is to have some logic that is specific to a certain type of code, can ONLY process and accelerate that code, and cannot be used for anything else. im pretty sure thats what tessellation in gf100 is NOT... it simply would go against nvidias design goal of having a general purpose monster flop throughput processor...
so did nvidia lie? i wouldnt say so...
and even if they did, there is no way to prove them wrong, it will always be an argument of different interpretations and definitions... and in the end, like i said before... if it performs well, who cares? :shrug:
will it actually perform well in a real world scenario is something that we are all curious about though... i cant wait for avp and some other actual game benchmarks with gf100...
in the end i dont think gf100 will be vastly superior in tessellation compared to rv870... it will be faster, yes, but the difference isnt the 100% nvidia first claimed... in a synthetic benchmark, the best case scenario, in a certain scene, so the best case of a best case, gf100 is twice as fast as rv870... in reality the difference is probably below 50%, so 30fps vs 40fps in avp or something like that...
tessellation is NOT a killer feature for gf100 if you ask me...
Asuka talked about the GTX 470 again, but gave no benchmark numbers because of the NDA.
http://we.pcinlife.com/thread-1369198-1-1.html
The GTX 470 is slower than the HD 5870 in DiRT 2 and STALKER: CoP.
At 1920x1200 the GTX 470 is 10% slower than the HD 5870 1GB.
At 2560x1600 the GTX 470 is as fast as the HD 5870 1GB.
The GTX 470's TDP is much lower than 300W, but it runs significantly hotter than the HD 5870.
How the GTX 4x0 "kicked" the 5870's a*s: DX11 tessellation image quality comparison
http://www.nvnews.net/vbulletin/showthread.php?t=148635
How the 5870 kicked the NVIDIA 480's a*s: DX11 performance without tessellation
http://www.nvnews.net/vbulletin/showthread.php?t=148636
and this:
http://bbs.pczilla.net/attachments/m...b47f6b7056.jpg
:rofl:
I heard NV locked the 470's clocks in the BIOS - nobody can OC a 470 right now. The new AIC vendor Colorful didn't even get a 480 sample from NVIDIA.
NVIDIA hasn't decided on the 470's clocks yet.
Depends on which games you're talking about. Future games will make much more use of compute shaders and hence the distinction between games and general computing will begin to fade. The software is badly lagging the hardware at this point so it's hard to see the benefits on anything more than an academic level but hopefully that changes soon.
I still won't reach a conclusion based on that pic. The two pics are not exactly the same; there is a slight difference in the angle and the time at which the screenshots were taken.
I'll reach a conclusion when I get the 470 for review (I have been assured of one, fingers crossed :shrug: )
Little OT, just for fun : http://thumbnails16.imagebam.com/708...4c70848239.gif
That's what a $320 5770 CF setup can do in Unigine at 19x12, 4xAA/16xAF, stock speeds, with tessellation on :p:
Saaya, I thought we had a PCIe rep confirm that neither the HD 5970 nor the GTX 295 was certified as PCIe compliant, because they could exceed the 300W TDP limit. So if the GTX 495 goes over 300W and Nvidia can sufficiently cool it, losing PCIe compliance should not be a big deal, right?
its getting ridiculous how many people think gf100 does not have fixed-function tessellation. if nvidia were that incompetent, maybe they should hire people off of tech forums to architect their gpu's. most people probably dont know the difference between fixed-function logic and programmable logic anyway.
what they did was fairly simple. gf100 basically sets up the scene in parallel, compared to the serial setup of other gpus. that works well for all of the small triangles tessellation creates.
OMG, let's hope the angle or something else is causing these big quality differences, because otherwise one may conclude there is a big flaw in ATi's implementation of the most important feature introduced by DX11. That conclusion would make the 5x00 series pretty useless for DX11 games by comparison.
But let's hope the angle, or something else, is making the difference. Also, let's hope DX11 games never come to market :p:, because otherwise the conclusion would be devastating to ATi.
The 5870 isn't incredibly fast in Dirt 2 under DX11 - playable, yes, but you're losing a lot of FPS. It wouldn't be that hard to prove the case either way, but in Dirt 2 tessellation is done on top of everything else, not instead of something else; same with AvP.
I suppose in a few weeks we'll see the numbers but the fact that they are missing is worrying.
What's there to prove? Nvidia says there are dedicated tessellation units. They designed the chip. What more do you need?
Seems like you missed (or intentionally forgot) the images posted on page 76.
This is how Nvidia looks: yes, the same crappy quality - the trees look horrible, the roof awful.
http://bbs.expreview.com/attachments...be60baceff.jpg
Enough with the humongous pics people. Olivon, what's the source for that pic?
:rofl:
How could I forget such crap? It looks very much like the 5870 results on the linked pages, but I didn't see it - this thread moves fast, you know. But you can see a HUGE quality difference with this one:
http://bbs.pczilla.net/attachments/m...572283563a.jpg
Which is provided by these guys:
Something is not adding up, but let's wait and see whether more reliable sources confirm your view, because otherwise the conclusion would be devastating to ATi, as said.
First of all, christ people, Unigine looks pretty much identical on both ATI and NVidia; if it weren't, one camp or the other would call out CHEATER!
Second of all,
we have already come to the conclusion that running 2560x1600 on a 1GB card causes a dramatic fps drop due to the framebuffer taking up such a large amount of memory. Quote:
Olivon
For people with monstrous 30" displays there will be a 2GB 5870 shortly, in a few different flavours (including an OCed one).
So again, these results aren't exactly comparable.
http://bbs.expreview.com/viewthread.php?tid=27741
Don't know if it's legit or not :shrug:
did you read what mao posted and what i replied??
That thread is suggesting that the 470 is compromising on image quality, based on the pics posted there.
What I said is that you can't draw that conclusion, as the pics are not from the same angle, and the so-called inferior-looking 470 pic may well be down to the difference in camera angle.
Seriously, read before quoting people.
See the pic I posted: it's from an ATI setup and shows the same scene, and the trees look identical to what the 470 is showing. So I was questioning the guy who concluded that the 470 is performing better because of compromised image quality.
Quote:
How could I forget such crap? It looks very much like the 5870 results on the linked pages, but I didn't see it - this thread moves fast, you know. But you can see a HUGE quality difference with this one:
Which is provided by these guys:
Something is not adding up, but let's wait and see whether more reliable sources confirm your view, because otherwise the conclusion would be devastating to ATi, as said.
Which drivers is the ATI card using? Some people said Cat 10.3 boosts ATI up to 30-ish fps in it.
Anyhow, I don't get your conclusion at all. Why does ATI have to worry, considering that the 5870 is faster than the GTX 470 at 1920x1080, ties it at 2560x1600, and the 2GB editions are likely to beat the GTX 470 and probably get close to a GTX 480....
And that's not even counting if they've bothered with a 5890
http://www.pcgameshardware.com/aid,6...eviews/?page=5
and blurry trees on the 5870? Quote:
After extensive tests and direct comparisons we conclude that the HD 5000 AF is equal to the Nvidia driver default ("Quality" including "Trilinear Optimization") - except in some games like Crysis. With HQ-AF without "economy measures" a modern Geforce still filters more homogeneously than a Radeon HD 5000.
http://img717.imageshack.us/img717/2...0517220684.jpg
the roof looks the same on the 400 series as on the 5000 series.