it would be nice for this thread to actually contain news in it
2009, TSMC 40nm for graphics alone: HD4770, nVidia GT220, GT240, HD58xx, HD57xx
Now in Jan, add HD56xx/HD55xx.
Seems like getting 2 million shipped units would be tough sharing capacity across all these different chips. But 2 million is a drop in the bucket compared to global PC sales (most of which are notebooks) of HUNDREDS of millions annually.
Let's assume the 2 million is HD58xx only. It launched Sept 23, 4 months ago, so that's about 500,000 per month. Each chip is 334mm^2. A 300mm wafer has an area of about 70,680mm^2. That's 211 candidate dies (although some at the edge are cut off). Conservatively guesstimate usable dies by multiplying by 20% (not an actual yield figure), and we get 42.
So our worst-case scenario is about 11,900 wafers per month, well below the 40,000 wafers/month capacity.
In reality most shipments have surely been the "low end" HD57xx, which has a tiny 166mm^2 die. At only about 6,000 wafers per month, AMD could have easily shipped 2 million DX11 products.
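For anyone who wants to check the arithmetic, here's a minimal sketch of the same back-of-the-envelope math; the die sizes, the 20% usable-die fraction and the ~500k chips/month rate are the assumptions from the post above, not official TSMC/AMD figures:

```python
import math

# Back-of-the-envelope wafer math; all inputs are the post's assumptions.
wafer_area = math.pi * (300 / 2) ** 2        # ~70,686 mm^2 for a 300mm wafer

def usable_dies_per_wafer(die_area_mm2, usable_fraction=0.20):
    gross = int(wafer_area // die_area_mm2)  # ignores edge loss and scribe lines
    return int(gross * usable_fraction)      # pessimistic 20%, not a real yield model

chips_per_month = 2_000_000 / 4              # 2M units over ~4 months since launch

for name, area in [("HD58xx (334mm^2)", 334), ("HD57xx (166mm^2)", 166)]:
    dies = usable_dies_per_wafer(area)
    print(f"{name}: ~{dies} usable dies/wafer, ~{chips_per_month / dies:,.0f} wafers/month")
```

Running it reproduces the ~11,900 wafers/month worst case for Cypress and roughly 6,000 for Juniper.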
DOUBLE POST, sorry
It's easy to say they're not in trouble, when you have no faith in ATI. ;)
We'll see how this all lines up in reality, but Nvidia reeks of desperation too much for everything to be A-OK. At the very least I don't think they're going to make the sort of profit margin they planned on.
GTX 295 not in stock
GTX 285 in stock around $390
GTX 275 not in stock
GTX 260 in stock around $190
5850 in stock around $290
5870 in stock around $399
4870 in stock around $155
4870 is cheaper than the GTX 260 and not only beats it but gives the GTX 285 problems in some games.
5850 is faster than the GTX 285 in everything, $100 less expensive, and also supports DX11.
5870, at around the cost of the GTX 285, can take on the GTX 295 in most cases, though it won't win every time. So no, Nvidia is not OK in their current position; they need Fermi and its derivatives sooner rather than later.
my main rig has a 5870, my secondary rig has a gtx 285 2gb, 3rd rig has a gtx 275. I like the 5870 obviously, but i like nvidia's drivers MUCH better.
We're comparing drivers for Cypress (4 months old now) to the very mature, almost two-year-old GT200 drivers? lol
oh.. and fudzilla Isn't actually the first to say such a thing
I can honestly say I never had a game fail on me or be unplayable because of bad catalyst drivers without there being a workaround.
NVIDIA already announced they were pulling all 200 Series from production except GTX 275 based cards (295 and that 275 Co-Op by eVGA). If they aren't in stock that is a problem with the retailer, or NVIDIA is focusing solely on getting the GT300 out and skimping on quantities of the GT200b cards.
It's really funny how many people are in this thread defending their ATI purchases, over at Guru3D they ban you for talking about ATI hardware in the NVIDIA section/thread. Thank God for XtremeSystems :)
Also, the 275s are in stock, I don't know what the user a few posts above is talking about; hell, here is the 275 Co-Op card, which should be the hardest to find (GT200b + G92 on a single PCB): http://www.newegg.com/Product/Produc...-527-_-Product
But honestly this close to the GT300 who is going to buy a GTX 275 or 295 now after waiting for so long after their launch? Any NVIDIA consumer looking at their cards is going to wait until the GT300 is here.
theres still a healthy 3dfx fan community around... its mostly asleep, but just go around and say something bad about 3dfx and youll get poked from all sides :lol:
thats the worst thing... if 3dfx would still be around, it would just be another company, but they died and as a result somehow reached martyr status in the minds of many gamers and enthusiasts... all the bad things and problems are long forgotten, and the good things are exaggerated beyond reason :D
its very possible... xbox's xenos was supposed to be a desktop gpu but was cancelled/skipped, instead ati double pumped their previous design and built on that... both ati and nvidia have done this several times, and even intel and amd... what ever happened to K9? ;) it was skipped/cancelled pretty early in dev process... intel wanted to launch an x86 SOC several times, they failed and were cancelled... what happened to tejas, the next thing after prescott? skipped/cancelled as well...
you never had an X2 card then i guess ;)
its really ironic how nvidia and sli push dual gpu cards as the highend gaming solutions, but when comparing gaming experience they tend to have waaay more issues than single gpu highend cards...
dual gpu cards only make sense for huge res 24" or 30" monitors, but even then they have serious issues... what do hardcore gamers play on? the fastest single gpu cards hooked up to 19" tfts or even 17" or 19" crts... and theres a reason for it...
I see nothing wrong with the Co-Op. If anything, it is a unique card that shows the industry and consumers we should actually HOPE that more companies come up with new, innovative and interesting ideas. If people begin ****ing all over it, shame on them.
I for one am sick and tired of reference-based cards popping up everywhere I look.
As for the lack of other high-end NVIDIA cards (GTX 275, etc.), it was a smart move popping them into EOL status. They couldn't be produced at a competitive price, so instead of losing money on every card sold, they were axed. Seems like a smart move to me...
yeah, very smart move, now they don't have anything against the 5000 series or even the 4000 series :p: if they plan to EOL everything because of their production cost, i fear that Fermi's life will be short too :p: as for the co-op, that's a total rip off; with only 2 or 3 proper PhysX titles you must be a real imbecile to pay $369 when you can get a 5870 for $20 more
hmmm well ati was lucky that MS picked it up, for a console it was perfect... 360 wouldnt have lived half as long without it... i think a big part of ps3 failing is due to the 360 performing that well even now in its late days...
but for a desktop gpu r400 would have been worse than r600 i believe...
what what what? :o
you see nothing wrong with a card that bundles a castrated g92 that can only be used for PhysX and not graphics processing? how about the fact that it is priced higher than buying a gt200 and a g92 separately? still not convinced?
how about nvidia recommending everybody buy a 260 as a PhysX card just when the co-op launched, because "a g92 is too weak for PhysX acceleration in the latest and upcoming titles"?
If you read what I wrote, you will see my statement has nothing to do with price, performance or anything along those lines. Rather, it was a comment that more companies should be taking EVGA's lead and thinking outside the box.
As for the Co-Op, it has its uses. Systems with a single PCI-E slot, folders, etc. can all benefit from the two-in-one approach and yes, you pay extra for that. That's life.
ok, i actually agree on that... i think asus did much better with their mars though... i mean think about it, that card will most likely outperform even the fastest 512core fermi :eek:
the 260+250 frankenstein card was really odd and pretty pointless...
systems with a single pciE slot could have gone for a 9800gx2 or 295 instead...
is it possible to let one of the gpus on those cards run physx actually? shouldnt be a problem right?
yeah! It's absolutely stunning what developers are able to squeeze out of the Xenos!
This in-game trailer for upcoming Halo: Reach is testimonial of how this graphics architecture was future-proof:
http://www.youtube.com/watch?v=kco1sYXGwYE
This thread reminds me of extensive Conspiracy Theory discussions.
99% assumptions, 1% unconfirmed facts...
Not really. We had a pretty nice and healthy discussion about MS in here, and the conclusion is fairly clear.
http://www.xtremesystems.org/forums/...d.php?t=243190
Who wants to sell a GTX275 - for $100?
(its obvious, but highlighting for folks waiting for Fermi)
The GTX cards are all great products with very good performance and features. I certainly wouldn't reject it if I won a couple as a prize. GT200/b is better than AMD's HD48xx lineup.. but that comes at a price. Even though the GTX260/275 sell for a fraction of the price of their high-end cousins, the chips and boards cost about the same.
chip / process / die area (mm^2)
G92      65nm   330
G92b     55nm   231 / 260?
GT200    65nm   576
GT200b   55nm   470 / 412?
Fermi    40nm   ??? (BIG)
HD38xx   55nm   192
HD48xx   55nm   260
HD58xx   40nm   334
HD57xx   40nm   166
We know Fermi will be big. Like the HD5870 it's 40nm, but much bigger. And that means more power. And a more complex 384-bit memory PCB vs the HD5870's 256-bit (just like GT200 vs HD4870). AMD will be able to steal market share with a price war like before. nVidia will either drop prices really low and make no profit (i.e. GTX275 for $150), or lose market share.
Because in the end, even if Fermi is 50% faster, makes coffee and does your taxes, it's still very big and expensive. AMD already flooded market with cheap DX11 cards, and with today's economy struggling to recover, very few will be able to justify the price tag over mortgage bills.
http://www.hardforum.com/showthread.php?t=1325165
Fermi is 23.4 x 23.4 mm (~548mm^2), 3.2 billion transistors
am i lost? i thought this was the gf100 thread.....
hrm, that makes it exactly the same size as the gt200. i hope that power consumption is the same as gt200 or less; hearing about a 250-300 watt single gpu doesn't inspire confidence...
The "magic" of fixed hardware, focus/optimization on one single platform, closer-to-the-metal APIs than their PC counterparts, small memory footprint OS without bloatware, and last but one of the most important things - enough time, money and other resources.
Just look at what Naughty Dog/GG have managed to put out with Cell and castrated G71.
When is the REAL Farming going to begin!!!! :(:shakes:
Waiting for Fermi is like waiting for AMD Bulldozer.
The chances, the sheer odds, of either of these products being released before the end of 2010 are so remote, so improbable, that I have trouble even saying it.
Everybody here remember 2004 - the big unveiling of THE 6800 ULTRA.
What a huge change that was. Night and day. Suddenly, you could actually play Far Cry. AA performance was immense. 35GB/s 256bit memory... seemed like something pulled out of Star Wars: Attack of the Clones. And 16 "pipelines".. wow it seemed like SO MANY.
Fast forward a bit over 5 years. The 256-bit HD5870 features over 150GB/s. 1600 SPs. The 384-bit Fermi will be around 200-240GB/s. Seems like we're hitting a ceiling, since we don't see anything beyond GDDR5 on the map and the only option left is 512-bit.
But the surprising thing is, for all those BILLIONS of transistors, it's still just little simple textured triangles. Voxels didn't catch on. Ray tracing is still a pipe dream. Nobody even makes use of geometry morphing. It's approaching 20 years after Terminator 2 and Jurassic Park, yet where are all the photo-realistic virtual worlds?
Hoping the industry won't sour like the Simpsons.. 10 years from now playing as Rodriguez in Call of Duty 17 - same old point and shoot.
Good post. Even with Fermi's new architecture, it makes me wonder how much more they can really squeeze out of a refresh - 20%, 50%?
I think pretty soon PCI-SIG will have to move up to PCIe 3.0 for another 100W. Eek, seems like there's no other way to avoid this architectural/performance ceiling other than to give more raw power to the cards.
well, you have the following options:
512-bit buses, higher GDDR5 speeds (7-8 GHz effective), more cores, more RAM, 28-20nm shrinks and lower, etc. I think we have not reached a ceiling.
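For rough numbers, a quick sketch of what those options buy in raw bandwidth; the bus widths and GDDR5 data rates below are hypothetical, not announced specs:

```python
# bandwidth (GB/s) = bus width in bytes * effective data rate (GT/s)
def bandwidth_gb_s(bus_width_bits, effective_rate_gt_s):
    return bus_width_bits / 8 * effective_rate_gt_s

print(bandwidth_gb_s(256, 4.8))   # ~154 GB/s, roughly an HD 5870 today
print(bandwidth_gb_s(384, 7.0))   # ~336 GB/s with faster GDDR5 on a 384-bit bus
print(bandwidth_gb_s(512, 8.0))   # ~512 GB/s with 8 GT/s GDDR5 on a 512-bit bus
```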
good post :)
Photo realistic gaming won't happen in our lifetime. "little simple textured triangles" will be around for a long time to come - drawn in numbers they work fairly well. When all is said and done, gaming graphics are still an absolute galaxy away from where they were in the days of T2 and Jurassic park.
Before we get carried away with this "nostalgic trend", let's put things in context.
Doom was released in Dec 1993, roughly 16 years ago. Do take a look at that game and compare to where we are today. I think that wasn't bad for 16 years.
also take a look at the Fermi car raytracing demo; now zap 16 years forward; it's not unlikely that instead of one raytraced car we'll see a whole game world ray traced at playable FPS at a good resolution
this will be JP/T2 territory :)
and that would be?
i just read the last page and it's still people claiming it doesn't exist because they personally haven't witnessed it, while others insist that there def is something wrong.
your one of them religious types arent ya? oh lordy lord, make mah miracle happen! :D hehehe jk
hmmm i played far cry on my 9500np@9700p :shrug:
well, the jump from doom to real 3d games was amazing, everything after that was... well less than i expected...
it makes perfect sense, as we approach reality, the amount of detail that needs to be increased to make a notable difference grows exponentially... a smart way around this would be to actually track where we are looking on the screen, and only render that focus point in high detail... we wouldnt really notice if the rest of the screen was blurry, our eyes arent good enough for that :D
the problem is just that it takes some time to render a frame, and i dont think we can adjust the focal point of the frame that quickly...
I'll raise your bet and say it happens in the next 4 years. Well within any of our lifetimes .....unless you get hit by a photorealistic bus of course
Feel free to quote me when the time comes.
We'll also have robot servants within 10 years, for the elite at least. They already exist, btw.
IMO gameplay is far more important than realism. All the effort being put into photorealistic rendering for gaming is entering wasted-resources territory. And the heavy price tags are likely starting to show diminishing returns for IHVs and ISVs. Move on to immersion now, then worry about the ultra-fine details. (I bet Avatar isn't anywhere near as impressive on a 20 inch monitor)
A game 5 years old can create a much more immersive experience on a multimonitor display than the best rendered games of today. Makes sense when the ultimate goal of photorealistic gaming is immersion.
Add good gameplay to the immersion of multimonitor and it's a winning combo IMO. 3D on a multimonitor setup would probably be even better, but it would have to be done in a way that doesn't involve glasses. That just kills it for me, since I wear glasses already.
Yet the evolution of graphics stopped in 2007. Crysis's level of graphics has yet to be matched, and that was 2.5 years ago. Now that consoles have taken over, there is no incentive for (much) better graphics. The next gen of consoles will only match Crysis-level graphics, and that will be by 2012, half a decade after Crysis itself.
PC gaming is dead, and that means an industry which was moving at the pace of an F1 car is now moving no faster than a donkey. It's like a switch has gone off. I don't think a lifetime is enough for those who wish to see photorealistic graphics; a millennium would be more accurate...
i wouldnt be so sure... the pc segment is about to collapse, yes, but pc gamers wont just disappear and go back to inferior interfaces and image quality... the market will continue to evolve, at console pace unfortunately... so it will be very slow steps... but big ones...
No.
For example, Crysis.
So, let's say it represents the current level of our gfx tech advancement. Let's try and get it to the realistic level.
All the modern accelerators even in multi-GPU setups struggle with Crysis with proper realistic resolution (4k x 3k).
So add a year for the tech to catch up at least (++ computing power requirements).
Texture quality is also lacking. Need 2x-4x higher texture quality. 2-3 more years (++ computing power requirements).
Geometry... is WAY behind. If you want photo realistic image quality in a 3D game you need 10-20x more polygons at least. That's 5-6 more years till it's actually possible to render that (++ computing power requirements).
Current lighting is crap. We need raytracing. Realistic raytracing with huge resolutions won't be possible really soon, 5 years at minimum (++ computing power requirements).
Sum up all the hardware performance requirements... this is quite a huge jump needed in order to keep a decent frame rate in such a realistic game. This cannot be achieved in 2-3-5 years, for sure.
Once the hardware is out, we can create such a game!
And now imagine creating such a game. The content. Each model would take a TON of time to create. Such a game would take 5 years to develop for a HUGE studio.
So don't expect anything within the next 10 years for sure.
Sure, I was a bit over the top with my speculation. Still, my line of thought stands, considering that the PC segment was the training ground for next-gen graphics. Now that game devs are completely untrained in the new techniques introduced by DX10 and DX11 (since there is no incentive to be otherwise), they will have to develop those skills from scratch by the time the next gen of consoles is released, which won't happen overnight.
The X360 looked impressive at first, but that was because developers were using techniques they already knew from the PC arena (DX9, Shader Model 3, HDR). The X720 (or however it will be named) will be far less so; since games are still written in DX9 code, there will be no prior experience with anything newer by then...
Maybe it won't take a millennium but certainly more than a (normal) lifetime to see photorealistic graphics.
You're wrong about the bolded part.
Right now, when creating an object/character for a next-gen game, a 3D artist creates a really high-poly, photorealistic model using 3ds Max/Maya and ZBrush or Mudbox for the finer details, then creates another identical, low-poly version of that model (which will be the in-game model) and bakes all the detail from the high-poly model into normal maps, displacement maps and ambient occlusion maps applied to the low-poly one, so in-game, with proper shading, the result looks good even though the geometry is lacking.
So, in the future, you won't have to create the extra low-poly model, and bake everything from the high to low version. You will just create a high-poly model and use it in-game.
You will actually cut down on production time.
here is an example, made by Vitali Bulgarov.
High-poly model, made in XSI and fine-tuned in ZBrush. All the small details are actual geometry; everything is real and not a shading trick (bump maps etc.):
http://www.bulgarov.com/images/rabag...h_front_hp.jpg
Low-poly model, with all maps baked into it from the high-poly one (normal maps and ambient occlusion maps) + color textures.
http://www.bulgarov.com/images/rabag..._wireframe.jpg
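For anyone curious what "baking" boils down to, here is a much-simplified, runnable stand-in; real tools (Max/Maya/xNormal) ray-cast from the low-poly mesh to the high-poly one, while this just derives a tangent-space normal map from a height/displacement map, but the principle is the same: fine detail ends up in a texture instead of geometry.

```python
import numpy as np

def height_to_normal_map(height, strength=2.0):
    """height: 2D array in [0, 1]; returns an (H, W, 3) array of unit normals."""
    dz_dx = np.gradient(height, axis=1) * strength
    dz_dy = np.gradient(height, axis=0) * strength
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(height)))
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)

# Tiny example: a bumpy 64x64 "high-detail" surface baked into a 64x64 normal map.
h = np.fromfunction(lambda y, x: 0.5 + 0.5 * np.sin(x / 4) * np.cos(y / 4), (64, 64))
print(height_to_normal_map(h).shape)   # (64, 64, 3)
```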
You do realise that high polygon models will take 4-10x more time to create, right?
Look at the movies: that's not actually realistic gfx yet either, because you can't really come closer and look at everything carefully.
They take 5+ years to draw and render (Avatar, for example), and that is very little content compared to any interactive game, because they only create the stuff that makes it into the movie's frames... and for only 2-3 hours of entertainment...
zalbard, have you read what i just said in my previous post? read again.
Right now, every object in a game is created in two versions: a high-poly one with millions of polys, and a low-poly one (the in-game model) with 200-300 polys for props or 5-10k for characters.
So in the future, with better geometry performance, artists won't have to go through the extra step of creating the low-poly version; they will just do the high-poly version.
And Avatar started being worked on around 2005, so it was around 4 years in production, but a production pipeline for a movie is way more complex, with a lot of research and inventing new ways of doing things.
You cannot compare a movie production pipeline with a game pipeline.
But better geometry will mean a faster workflow. The low-poly models are the hardest to create, since you have a small poly budget and you have to convey the shape in the lightest mesh possible with the best detail reproduction, while also being careful how you build the model so you don't get weird deformations while animating, which will appear because the model is low poly.
Also, in movies, if you make a forest you have to make hundreds of different trees; you cannot make 2-3 models and duplicate them. In games you create 2-3 tree models, 10 max, then you multiply them and that's it, the forest is done. And so it goes for any model/prop in-game. It takes a lot more time to create those 2-3 hours of entertainment for a movie than to create a game.
PS: I work as a 3D artist; trust me, better geometry performance in the future will mean a faster workflow.
Well, for the moment this workflow with low- and high-poly models will continue. But it will take advantage of tessellation to get rid of the low-poly look.
You have the high-poly version and you extract a displacement map from it. You use it on the low-poly, in-game version, and with the aid of tessellation and displacement you create geometry on the fly and enhance the low-poly model to mimic the geometric look of the high-poly one.
And since tessellation is distance-dependent, you get better performance since you only need dense geometry in the nearby area.
Even in movies they use medium-detail models (so you never see weird geometry on them) and use displacement to create small stuff like skin pores etc.
Frankly, I don't believe we will see actual full-blown models in games or movies; it wastes too many resources. Games in 10-15 years will start to look closer to what movies look like, since DX11 introduces tessellation and displacement. These two features will make it possible for games to move towards movie visuals in 10-15 years, depending on how hardware and software APIs evolve.
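To make the distance-dependent part concrete, here is an illustrative sketch (toy numbers of my own, not anything from a real engine) of how a tessellation level could fall off with distance and how a baked displacement value would push the generated vertices along their normals:

```python
def tessellation_level(distance, near=5.0, far=100.0, max_level=6):
    """Full subdivision up close, none beyond 'far'."""
    t = max(0.0, min(1.0, (far - distance) / (far - near)))
    return round(t * max_level)

def displace(vertex, normal, displacement_sample, scale=0.05):
    """Move a generated vertex along its normal by the sampled displacement value."""
    return tuple(v + n * displacement_sample * scale for v, n in zip(vertex, normal))

print(tessellation_level(10.0))   # 6: a patch right in front of the camera
print(tessellation_level(90.0))   # 1: a distant patch stays nearly low poly
print(displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5))   # vertex nudged up by 0.025
```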
Photo-realistic.
Not physically perfect and exact to real life environments. Just has to look like it.
Slap some better lighting and massive AA onto gran turismo 5. That's photorealistic.
http://generationdreamteam.free.fr/a...alLifeJuly.jpg
You're thinking of hyper-realism.
Not photo-realism. We're practically there for photorealism right now at this very point in time.
If the brain can mistake something for being real, that's enough to qualify.
We're not going to be taking electron microscopes to game footage to verify their physical correctness. There comes a point when you cross the barrier of what realism is in terms of game graphics.
I think it was the 4k x 3k resolution comment. You can't view a photograph or video at 1920?
no... it's not photorealistic. if it were, you wouldn't be able to see the difference between a photo of the real thing and an in-game screenshot
getting close != there yet ;)
photo realism in anything other than people/animals is 1000x easier. racing games will be the first to reach photo realism; everything else will be a decade behind. and due to the raw power needed, photo realism with a small group of people will be done quicker than something like an MMO.
Dear audience, while we're excitedly waiting for the mystical Fermi, the topic du jour is photo-realistic gaming ;)
The Fermi Paradox
source: http://en.wikipedia.org/wiki/Fermi_paradox
Quote:
The apparent size and age of the universe suggests that many technologically advanced Graphics Cards ought to exist.
However, this hypothesis seems inconsistent with the lack of observational evidence to support it.
:rolleyes:
If we see FERMI enter the consumer market before 2011, I will autophilate.
I love how you say "this game," as if these advancements would only be used in one game before anyone else used them. I also love how you seem to be adding the years together, as if one company alone will do all of this. That's not the way it works.
Even if everything took as long as you said, it's within a 10-year time frame.
People, pull the Crysis theories out of your heads. That game was no massive advancement, just a lot of semi-recent methods put into play and optimized visually. No one is doing, or has done this because in most cases this doesn't make them money.
I guarantee you that in exactly 1 year hardware will have caught up with Crysis, and that's an awesome thing; that's technology moving faster than you think it is.
Grats, you have completely missed the point of my post while picking on my use of English...
I went ahead and heavily edited my post to reflect this. :rolleyes:
No.
Hardware might be, yeah (barely, probably), but developers will not have enough time to actually finish a serious game with that level of gfx by then; the creation process itself will take a while as well.
Yeah, sure, there are so many games that look a lot better these days! Oh wait... :rolleyes:
well then let's ditch tessellation and go with SVO (sparse voxel octrees). this is a very powerful way to increase texturing and geometric realism all in one.
http://www.youtube.com/watch?v=VpEpAFGplnI
Guys, at the risk of venturing off topic as well, I can't wait for HL3. That should look incredible. Anyone else wondering why it's gone all quiet with Episode 3?
Btw, Crysis is definitely photorealistic at times and definitely would be with a bump in res.
This is what happens to a thread when there is no real news to report. Give us something nvidia!
I did nothing to pick on your English. There didn't seem to be any mistakes in the previous post at all.
I'm curious, do you know much about the industry, or are these assumptions pulled out of thin air?
^ did you purposely, completely ignore this line?
Quote:
No one is doing, or has done this because in most cases this doesn't make them money.
let's face it, Crysis was not a very good game. Not terrible, but everyone will agree that with less focus on trying to be photorealistic and more focus on gameplay, it could have been in a top 10 of the greatest games of all time.
Crysis is popular and sold what it did because of its hype; no one else is going to be able to hype graphics like that and sell copies for a while.
This doesn't mean I don't like or don't respect the game for what it's done. It's pushed hardware and that's a great thing, but this explanation is the reason my quoted statement above is true.
You, sir, are the one missing the point; the point of your post was clear. But to an extent I do agree with you: exact photorealism of life in a game I don't see happening for some time. It will come close sooner than we think, but I could be wrong. The advancements being worked on at the moment are looking bright, though.
to add, what in the world does this mean?
4k x 3k? Where did you even get that number? I will tell you: this subject is a specialty of mine, and there's really no such thing as a proper "realistic" resolution.
Quote:
All the modern accelerators even in multi-GPU setups struggle with Crysis with proper realistic resolution (4k x 3k).
that is based on so many things it's ridiculous.
<announcer> LOOK, IT'S FERMI!! I am not sure how many here know about tessellation, but Fermi has it! Tessellation!! TESSELLATION!!!
<man holding an anti-sign> OMG TESSELLATION!!!
<announcer> Say tessellation 10 times real fast and get a free card!
<excited man> tessellation tessellation tessellation tessellation tessellation tessellation tessellation tessellation tessellation tessellation
<excited man 2> I can't believe this is happening! I hope they bring back Elvis!
THE LIMITS
Why are we still powering games at 100fps+, where the fps graph looks like a cardiogram?
What happens when #triangles > #pixels? 1920x1080 is about 2MP. A 100 Mtri game would have ~50 triangles per pixel... inefficient, you think?
What happens when #SPs > #pixels? In just a couple of quick years we already went to 1600-3200 SPs. Once we reach millions of SPs, we'll have enormous inefficiencies.
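To make the arithmetic explicit (using the post's round numbers):

```python
pixels = 1920 * 1080            # "19x10", about 2.07 MP
triangles = 100_000_000         # a hypothetical 100 Mtri scene
print(f"~{triangles / pixels:.0f} triangles per pixel")   # ~48: most triangles become sub-pixel
```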
1. First of all, Crysis is over 2 years old (Nov 2007). I don't exactly see the likes of Batman: Arkham Asylum pushing the envelope. The "saddest" part is that it took Carmack at id 15-20 lines of code to implement mega-textures.. something even he was astonished nobody had done yet.
2. Bigger textures and more triangles are an uphill battle of diminishing returns, e.g. a 3x bigger texture uses 9x more bandwidth/space. This brute force is the wrong approach.
3. Every game, ESPECIALLY ones like Crysis, uses clever tricks to drastically reduce the amount of work being done with minimal loss of detail. This is not a bad thing. This is why the Radeon 9700 could do AA so easily.
4. Chumbucket843, thanks for the sparse voxel example. THIS is what I want to see more of. More innovation, not just more of the same. I think Fermi is a VERY AGGRESSIVE step in the right direction.
PS: Gene/DNA computers are supposedly super at parallel tasks. So 3D gaming on "embryos" is a go?
Damn dude, you don't actually read what I said.
You're pulling stuff out of your imagination/personal assumptions, while I do 3D graphics for a living, and you still think that high geometry will take a lot of time to implement.
Guess what: high-poly models are already created these days for every damn model in a game, coupled with a low-poly model to be inserted in-game.
You have no idea how a game pipeline works, so stop posting BS.
Don't take it as an offense, but you have no clue about this subject.
About high-res textures, that is not such a big deal actually. Right now they use 512-pixel and 1024-pixel textures. Using 2K-3K textures will need video cards with 4-6GB of RAM per GPU, but that is not actually such a big deal.
The main issue for the future is tessellation, and displacement in particular. Actually, we just need more rendering power so the GPU can calculate displacement faster, a lot faster, so we can use it everywhere in-game, properly, not Stalker or Dirt 2 style.
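As a rough sanity check on that "4-6GB" figure, here's a sketch; the texture count and the DXT5-style compression assumption (1 byte per texel) are mine, purely for illustration:

```python
def texture_mb(size_px, bytes_per_texel=1.0, mip_overhead=1.33):
    # square texture, compressed, plus ~33% for the mip chain
    return size_px * size_px * bytes_per_texel * mip_overhead / 2**20

unique_textures = 1000
for size in (1024, 2048, 4096):
    per_tex = texture_mb(size)
    total_gb = per_tex * unique_textures / 1024
    print(f"{size}px: {per_tex:5.1f} MB each -> ~{total_gb:4.1f} GB for {unique_textures} textures")
```

With these assumptions, 1K textures land around 1.3GB, 2K around 5GB, and 4K blows past 20GB, which is roughly why the jump to 2K-3K everywhere implies 4-6GB cards.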
Making a game that has Avatar's level of graphics will take a lot of time. It's not only about the technology, but also, and mainly, about the hell of an amount of time and work to spend on the project. And of course money. Zalbard is right here: it would take a decade to create a game that would look at least like Avatar. If it were otherwise we would be getting Avatars every two months in PC games and in the movies too )) And it would definitely be making them money, if a project like that didn't require that much effort and time, and thus carry a load of risks of never paying off... Right?
It looks like you guys are saying the same thing but don't understand each other for some reason.
No, we are not saying the same thing.
He argues that if you want a lot of complex geometry in a game, then surely it will take a huge amount of time to create it.
I'm saying that in today's games they are already creating high poly models along with the normal, low poly in-game model. They use the high poly model to create normal maps, bump maps, ambient occlusion maps etc.. to enhance the shading on the low poly one. They are already creating high poly models for each box and each character in the game.
How do you guys think they create a normal map with all that additional detail? You cannot manually paint it; you extract it from the high-poly model and apply it to the low-poly model.
So, in the future, if GPU power is sufficient, it will be faster to create a game since you can skip creating the low-poly versions of all the models. More geometric power in future GPUs will mean a simplified workflow and getting things done faster.
And also, VFX in movies are done with huge render farms behind them, with 500-1000 CPUs or even more. That's why you don't see that kind of work done in games: it's damn impossible to do, not because of time constraints but because we don't have the processing power to render such a game in real time.
hmmm why? what's the aggressive step nvidia took with fermi?
the ability to handle more geometry?
i dont think so... right now there are many games using many game engines which use different lighting models... the market of game engines is consolidating and this trend will continue... its less and less about the engine, and more and more about a set of objects and textures that come bundled with the engine... so you dont actually have to build the same thing over and over and over... there will be more and more objects that get recycled in various games... and as such, artwork wont require that much more time and money i think...
plus afaik a lot of work right now on artwork is spent on lighting things... manipulating the code to make the scene look the way you want... once you have raytracing for lighting, you wont have to do that... unless you want abstract lighting, and even then it should be very easy to do compared to lighting now...
that would be a good thing imho, we don't need 1001 barrel variations ;)
Quote:
there will be more and more objects that get recycled in various games.
That's exactly the point he was trying to make, I guess, but somehow it went into the wilds of Crysis. lol. I understand that the main point of discussion was the impossibility, for a long time to come, of making a game with photo-realistic graphics (not just high-poly models) or graphics like the Avatar movie. Well, everyone agrees on that one I guess, so that's why I said there is some kind of misunderstanding going on )
yup, a lot of time right now goes into faking GI by using multiple light sources to give the impression of light bouncing through the scene.
With raytracing you have your normal light sources, lightbulb, sun etc., and the rest is calculated by the engine.
Because they do not affect everything. Nobody does a 4K texture for a rock, planks, a leaf or some rat/cat in a game; they are only for environments, walls, backgrounds, some characters etc.
If you had film-quality texturing (4K textures for almost everything), then you would need a lot of RAM.
Or a different way to access a pool of system memory, fast.
Even though memory throughput to the GPU is not a bottleneck, the limited amounts are. I'd like to see AMD and NV invest in a new memory transfer protocol so that graphics chips can enjoy fast transfers from their VRAM but also have a faster way to access system RAM than today's normal memory caching methods. Since this is evident with yesteryear's architectures (GT200 chokes when it runs out of memory), I can only hope that NV (and ATI) look further than just waiting for PCIe 3.0, where basically doubling bandwidth to 32GB/s only puts it at around 15% of the VRAM bandwidth.
Solving that bottleneck would allow for massive amounts of textures in game.
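For rough numbers behind that "~15%" comparison (the VRAM figure below is an assumed 384-bit GDDR5 card, not a published spec):

```python
pcie3_x16 = 32.0        # GB/s bidirectional, as quoted in the post
vram_bandwidth = 230.0  # GB/s, assumed high-end GDDR5 card
print(f"PCIe 3.0 x16 is ~{pcie3_x16 / vram_bandwidth:.0%} of VRAM bandwidth")   # ~14%
```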
Actually Intel has a good part of Rambus's shares, and I doubt they'd cash them out anytime soon or at all.
This thread is sliding further and further away from the original topic. :ROTF: