Nope, bump mapping is just a pixel shader trick that changes the color of a pixel to simulate detail. Tessellation happens much earlier in the pipeline and affects the underlying geometry of the scene.
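To make the distinction concrete, here's a toy sketch (plain C, purely illustrative; the vertex values are made up, and real tessellation is configurable and runs on the GPU). The point is that the tessellator's output is more geometry, not pixel colors:

```c
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* Midpoint of two vertices. */
static Vec3 midpoint(Vec3 a, Vec3 b) {
    Vec3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    return m;
}

int main(void) {
    /* One input triangle... */
    Vec3 a = {0, 0, 0}, b = {1, 0, 0}, c = {0, 1, 0};
    Vec3 ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);

    /* ...becomes four smaller triangles, each of which can then be
       displaced to add real surface detail before rasterization. */
    Vec3 tris[4][3] = {
        { a, ab, ca }, { ab, b, bc }, { ca, bc, c }, { ab, bc, ca }
    };

    printf("1 triangle in, %d triangles out\n", (int)(sizeof tris / sizeof tris[0]));
    return 0;
}
```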
@skugpezz
So the overscanning is great in your opinion? I think Eyefinity has quite a few problems, but it's not the tech itself, it's the developers and the support from ATi's software side. Anyhow, I think 3D Vision is better after experiencing it, and it also has a larger install base in games. All we need now are more 120Hz displays.
Well, actually I know that, but don't bump maps contain the depth information (that gets "fakely" applied to the object)? Couldn't the already-developed bump maps somehow be converted into actual polygon info?
I'm not talking about doing this in real time, either. The bump maps could be pre-converted into whatever format is needed and then used in game. Obviously I don't know how to actually do this, but this line of thinking leads me to believe it should be doable.
With parallax maps it could be done even better, if it's really a possibility.
That was what I read in the article; however, I think Charlie can exaggerate and use a bit of artistic licence in his reporting. I am still hopeful for widespread availability in March.
I thought the Radeon 58xx launch here in the UK was awful; units have only just started shipping here in very limited quantities this past week. Are you saying that Fermi would be worse, and that we would need to wait until MAY?
Perhaps the respins are not only for hoped-for clockspeed increases but for bug fixes too?
John
Normal maps don't contain depth; they store the normals used in lighting calculations. The normal determines how light interacts with the surface, so faking the normal lets you simulate a different interaction per pixel - i.e. make you think light is bouncing off the object in a way that gives the impression of additional geometric detail that isn't really there. But this all happens in 2D. There's no way to take the data from a normal map and reconstruct 3-dimensional polygon information from it. It would be like asking the GPU to take a JPEG and convert it into a 3-dimensional model of the scene it was based on. Parallax mapping is no different in principle; it's just fancier normal mapping that also offsets the texture lookup using a height map (and the fancier variants add self-shadowing).
Going from polygons -> more polygons -> pixels is easy. Doesn't work in reverse though :)
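If it helps, here's a rough sketch of the "faking the normal" part, with plain C standing in for a pixel shader (the texel value and light direction are invented for illustration):

```c
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 normalize3(Vec3 v) {
    float len = sqrtf(dot3(v, v));
    Vec3 n = { v.x / len, v.y / len, v.z / len };
    return n;
}

int main(void) {
    /* A normal map stores a direction per texel, encoded as RGB in [0,255].
       Decode it back into a unit vector with components in [-1,1]. */
    unsigned char texel[3] = { 128, 160, 230 };  /* hypothetical sample */
    Vec3 n = normalize3((Vec3){ texel[0] / 127.5f - 1.0f,
                                texel[1] / 127.5f - 1.0f,
                                texel[2] / 127.5f - 1.0f });

    /* Standard Lambert diffuse term: brightness = max(0, N . L). */
    Vec3 light = normalize3((Vec3){ 0.3f, 0.5f, 1.0f });
    float diffuse = fmaxf(0.0f, dot3(n, light));

    /* The faked normal only changes the shading of this one pixel;
       the underlying surface is still flat - no geometry is created. */
    printf("diffuse brightness for this pixel: %.3f\n", diffuse);
    return 0;
}
```

Compile it as C99 (the compound literals need it) and link with -lm; all it outputs is one brightness value, which is exactly the point.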
my computer desk isn't big enough for 3 monitors :(
Let's break this down, because you make some good points. First of all, not much is publicly known about how well the Fermi architecture will scale to anything but the extreme high end; it may take a lot of doing for NVIDIA to adapt it to lower-end, more mainstream cards. NVIDIA's past issues with scalability are one of the many reasons why we are only now seeing lower-end 200-series cards. ATI, on the other hand, is an expert at scalability, as evidenced by the quick succession of 5000-series releases. Only time will tell what happens when the Fermi architecture gets scaled down, but one thing is clear: ATI has shown that NVIDIA can't keep using G80 derivatives if it hopes to compete in the sub-$200 price category.
False. And no, I won't elaborate.
3x 17" ;)
I've already stated I am not the one under NDA. I know someone who is, and they have technically broken that NDA by talking to me, so I am not going to risk getting them into some very hot legal water. And that is a terrible idea: I can't say for sure, but I'd wager Nvidia doesn't sign many Fermi NDAs with private individuals, so it would be very easy to draw up a short list of people who could have broken NDA with a photo like that.
I would say 650MHz would be near the upper end of the possible clockspeeds. I have been saying for quite a while now that we would see G200-esque clocks and no one wanted to believe me (mostly those of the green-tinted variety), just like I have been saying that GF100 is very similar in size to the 65nm G200.
Yeah. Say good-bye to the estimated 40% advantage over Cypress.
He doesn't often exaggerate the facts - when you can actually find them in his articles.
The UK has been getting a constant supply of cards; while that supply wasn't exactly huge, there was a supply. You seem to be implying there were no cards in the UK, which is false.
Yes, Fermi availability will be worse, depending on when Nvidia gives TSMC the green light for full production and whether the yield numbers I have heard are still accurate for A3. I doubt TSMC will be running full tilt to ramp over the Lunar New Year, or that Nvidia will have all the capacity it needs to get boards ready in around 2-3 months for a real launch.
Yep. With still absolutely no whispers of any of the GF100 derivatives taping out and the end of the year coming, the window on a 1H launch for any of them is getting smaller and smaller, unless Nvidia pulls a miracle and launches on A1 silicon.
Like most here have mentioned, I don't think Fermi will have a 40% advantage over comparable Evergreen, and I doubt NV will be able to hit their intended clockspeed.
So it's a friend of a friend or something like that, right? :)
Actually, me too: a friend of my second-best friend's cousin told me that the HD 6870 will release exactly two months after Fermi's availability, and it will completely decimate Fermi.
But I can't back it up. You know, NDA and stuff...
I can see this thread going right into the sewer.
Gentlemen: please present your arguments without the personal attacks.
Also, when making claims, back them up or it's just hot air.
Thanks for reading.
Hmm, I don't know Charlie, but he probably wouldn't exactly be thrilled to see this statement. :D
Anyway, if Fermi doesn't have something similar to Eyefinity for gamers, in my opinion it's not something I'd even consider a worthwhile upgrade, regardless of how 'fast' it may or may not be. Frame rates add nothing to the experience, at least for me.