Yeah, can't wait for the adoption of tessellation. :yepp:
Crysis 2 was already demo'd on 5870s in Eyefinity, so if that's any indication, it might not be that demanding and/or it's more optimized.
Crysis 2 has two teams behind it, one for the PC, the other for consoles.
Yes, but the 5870 six or whatever it's called seems to do quite a bit better in AMD's show-off sessions than the standard 5870 does in reviewers' and consumers' PCs.
I'm sure GT300 will support tessellation. Otherwise it will be an absolutely terrible flop.
How do you people know the hardware tessellation instructions are not physically built into the CUDA cores? If that's the case, then GT300 would be faster. I can't imagine Nvidia would make it software tessellation. It's probably hardware-level, just done in a different way. Since the whole purpose of Fermi appears to be being multi-capable... I suspect the CUDA cores can natively handle tessellation commands without another software layer.
both ATi and nvidia use software tessellation for dx11. it is superior to hardware tessellation because of performance per transistor. so technically it does support d3d11. fixed function is only useful for ROPs.
http://images.bit-tech.net/content_i...lysis/flow.jpg
Doesn't look it.
To put this to rest: a hardware tessellation unit isn't needed.
However, I would like to add this little tidbit from an Nvidia DX11 presentation:
http://www.hardwarecanucks.com/forum...326bfdc076.jpg
It is not, of course, but you can use idling units that are waiting for other units to finish their job to do some calculations in between.
Why do you think a program like FurMark (<2MB) can put a higher load on your GPU than Crysis? Because with Crysis not all units are at work at the same time, and with FurMark they actually are.
So, with good driver optimization, you are able to redirect tessellation calculations to idling units, at least theoretically.
I don't know if that's how it's going to be done with nVidia's Fermi architecture, but it's one way of doing it, I think.
Another way would be to detect whether tessellation is going to be needed by an app, and if so, assign a group of SPs to do the job; something like the sketch below.
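Just to make that concrete, here's a toy Python sketch of uniform 1-to-4 triangle subdivision, the core of the work a driver could hand off to idle shader units. Purely illustrative and not anyone's actual implementation: real D3D11 tessellation runs hull/domain shaders with fractional tessellation factors, and every name below is made up.

def midpoint(a, b):
    # Average two 3D vertices component-wise.
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def subdivide(tri):
    # Split one triangle into four by connecting its edge midpoints.
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def tessellate(tris, levels):
    # Each pass multiplies the triangle count by 4; this is the kind of
    # embarrassingly parallel work that could map onto spare SPs.
    for _ in range(levels):
        tris = [small for tri in tris for small in subdivide(tri)]
    return tris

mesh = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
print(len(tessellate(mesh, 3)))  # 4**3 = 64 triangles from one input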
AFAIK, Radeon HD5000 series cards have a hardware tessellation unit which does all the fixed-function tessellation work. They started to develop this as soon as they knew it was to be a main feature of DirectX (originally planned for DX10, then delayed until DX11), back when they were designing the Xbox 360 Xenos chip. Of course, the final DX11 version is wider than the one previously implemented by ATI (AFAIK, DX11 tessellation is a superset of ATI's tessellation), so previous ATI cards are NOT compatible with the new DX11 tessellation.
Dedicated hardware is always more efficient at the specialized task it is built for than more generic, flexible, programmable hardware. You are going to use more transistors to do a task if those transistors belong to a generic processor than if they are dedicated to (specially designed/optimized for) that task. The advantage of using more generic hardware is that it can be used for different things (so if there's no need to do that task, there are no wasted transistors), and that you can balance the workload the way you want.
But dedicated, specialized hardware for a task is always going to be more efficient at doing it.
The question is... does Fermi also have dedicated hardware for fixed-function tessellation, or doesn't it? And if not, how big an impact does that have when the chip has to do tessellation, compared with a chip that has dedicated hardware for it (the Radeons)?
Hmm here is hoping that eVGA's announcement this time next week is actually for the GT300 :)
It does sound like it is going to be a monster GPU (in areas of size, performance, power consumption and price), however it WILL be the fastest Single GPU card out there and that is all that matters to me.
I do not give two hoots about SLI or CrossFire setups, as they have woeful minimum framerates, rely on praying to the driver Gods, and are usually loud and extremely power hungry.
John
I suggest you try SLI with two GPUs on an X58 system. Almost all games of the past two years play better at high res in SLI than on a single card. Of the 20+ games I've got, only two come to mind that have real problems with SLI: Brothers in Arms Hell's Highway (framerate feels like 15fps) and Gears of War (problems with graphics flickering in the rain levels).
I have yet to have any experience with the i7 platform :(
My only opinions of SLI were gathered from an nForce 790 based board and 2 GeForce 8800 GTS 512 cards. Yes, the 3DMark scores were high; yes, the games did have high maximum and average frame rates... however they did have lower minimum frame rates and stutter :(
As for Crossfire... awful, just awful. Mind you, this was back in the day of the HD 3850.
John
Sure didn't the old Radeon 8500 feature a tessellator?
I really hope Nvidia's card isn't too expensive.
A good price/performance ratio is really needed.
the fermi 500 edition
500watts
500$
500 years before it's out to the public
nvidia make it happen pr wise LOL
or wait for the super duper heater dual gpu version and save on heating cost????
I don't know whether this has been posted before, but from the looks of it, tessellation is quite demanding:
http://www.pcgameshardware.com/aid,6...arks/Practice/
Personally, I was hoping for an almost free lunch on the new hardware.
At least with rocks on the ground, I believe CryEngine 2 did a very good job with Parallax Occlusion Mapping; quite a good final effect, and the performance hit was negligible.
http://img90.imageshack.us/img90/1685/unbenanntfz1.jpg
Yup, Crysis did a great job (very pretty game, still need to finish it though).
Pics from my comp
http://i7.photobucket.com/albums/y28...enShot0605.jpg
http://i7.photobucket.com/albums/y28...enShot0606.jpg
http://i7.photobucket.com/albums/y28...enShot0274.jpg
tessellation in unigine looks way better than occlusion mapping imo, and yeah, it's too bad that the amount of added polygons can't be adjusted...
would be cool to have a slider for it so you can basically adjust the level of geometry detail! that would be sweet!
nobody said it didn't look better, it does, I just stated that POM did a good job with a small performance hit.
http://www.crymod.com/thread.php?thr...tuser=0&page=8
those are the latest pics of RL2 on that page, make your mind up if you think it looks better or not :P
That looks incredible. Hopefully 5870x2 or Fermi can run it at great framerates at 1920x1200. 5870 is almost there gtx295 is even closer but I'm looking for a little bit more. I really want minimum framerates to never drop below thirty.
Damn guys still no news about Fermi?
1 week and we are in november :D
so when is fermi supposedly launching? still no launch date announced?
so 3 weeks? whoohooo :D
what 3 weeks?? fermi got canceled!! no more fermi.. no more nothing.. nvidia's closing doors soon.. after 10 years, 3dfx deja vu.. remember the voodoo5 6000?? fermijavu!! :(
why? amd's 5800 series decimated nvidia and they've got no chance to bounce back
don't believe me?? soon you'll all find out/get to read the news :(
12/31/2009 they will cease to exist
but why not? months are short... according to jensen at least ^^
he told the press to wait just a few short months until fermi comes out :D
i'm gonna use that next time my boss asks me about finishing my work... oh, just a few more months, geez, don't be so impatient! :D
well, i'm convinced there won't be any launch in 2009, actually they might not even make it in q1 either if a2 still doesn't work...
i'm just trying to stay open-minded and look at both pro and con.
nvidia pro = launch in 3 weeks
nvidia con = launch in q1 or even q2
reality is usually somewhere in between :D
:rolleyes: oh come on, stop it already :P
As much as NVidia has annoyed me recently, starting with the 9xxx series, I really do want Fermi to be an amazing card.
We need the competition and I've always used NV, until the 4870x2 anyway.
If NV goes we're in some serious trouble, besides them there's no real competition from anyone else. :(
NapalmV5?? for reals man, or are you just bored... or are you holding a card in your hand?
the guy even gave an exact date :D
btw... look at the nick, napalm v5
3dfx's napalm project, and voodoo5 anyone? :D
If only 3dfx was still around :(
I really do miss them.
Yeah, it's true. nVIDIA is being shut down in December... for... vacations :D
However, I may have some bad news.
A lil' birdie told me that a November launch seems impossible at the moment, but there's a possibility of a limited quantity launch in December.
Decent quantities in mid January-early February.
great, so now we will be dealing with one-top-video-card prices!
I've never seen napalm spread :banana::banana::banana::banana: before. However, I hope he is this time. :(
Q1 was correct then...
*sigh* And when it's finally released there usually is a huge shortage of the card and we have to wait even longer...
What a load of garbage. Nvidia isn't going anywhere. Worst case, we only see a paper launch in Q4 09 but Nvidia will not go out of business. If Nvidia was close to bankruptcy then AMD is infinitely closer.
5800 did not decimate anything. GTX295 is still faster than the 5870 and the 5870 isn't anywhere close to twice as fast as the 4870.
I think you do not realize he was just BS-ing ....
why would nvidia go bankrupt? even I make more money than AMD.
:D:D:D
exactly :up: finally ati have caught up to nvidia, but they haven't buried it just yet
my post was @ saaya/ati fanatics who all of a sudden are interested in fermi, or who think fermi will be a slouch or a no-show until next summer
we should embrace both ati & nvidia, and i hope they leapfrog each other more often than once/twice a decade
:toast:
5870 is faster than a 295. the image quality is no comparison. you could say the same with gt200 v. rv770 too. sometimes being objective is nice:p
hmmm from what i remember reading through a bunch of reviews, a 5870 is about 90-95% of a 4870x2, how's that not anywhere close to it?
i think the 5xxx series is overpriced, i want gt300 as badly as anybody, if anything to drop ati prices, and depending on price/perf to get a gt300 based card myself. i was spending as much time speculating in the 5xxx thread before the cards came out as i do in gt300 threads now...
hmmm are you sure? from what i remember from all the reviews it was 295 -> 4870x2 -> 5870 in most scenarios
Hey, now that Nvidia has stopped manufacturing the GTX 260, 270 and 285, will the new card take over? Does anyone know any information about the card/cards and when they are coming?
Obviously yes, new cards - new performance :P For now they will release a new GPU with EVGA, which I think will be a replacement for the old cards.
yeah, but keep peak flops in perspective. rv770 is 1.2 teraflops and rv870 is 2.72 teraflops. something is bottlenecked! i think it's off-chip bandwidth. they are wasting die space with all of those shaders. maybe cache or a wider bus would help.
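A quick back-of-the-envelope check supports that; a sketch of my own, using the public reference-board specs and counting one MAD as two flops, shows compute per byte of bandwidth rising roughly 70% from RV770 to RV870:

# Peak GFLOPS = shader count * clock (GHz) * 2 flops (one MAD per clock).
boards = {
    "HD 4870 (RV770)": (800, 0.750, 115.2),   # shaders, clock GHz, GB/s
    "HD 5870 (RV870)": (1600, 0.850, 153.6),
}
for name, (shaders, ghz, bw) in boards.items():
    gflops = shaders * ghz * 2
    print(name, round(gflops / 1000, 2), "TFLOPS,",
          round(gflops / bw, 1), "flops per byte of bandwidth")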
the framerate is lower but the quality is higher. fermi will also improve texture filtering. AMD says it's free but i am pretty sure that number assumes the whole texture is cached, which is unlikely.
Quote:
hmmm are you sure? from what i remember from all the reviews it was 295 -> 4870x2 -> 5870 in most scenarios
http://www.anandtech.com/video/showdoc.aspx?i=3643&p=13
Because the 4870X2 isn't twice as fast as a 4870. And since the 5870 is slightly slower than the 4870X2...
No, the GTX295 is faster than the 5870, and while the 5870 does have a near perfect AF algorithm, GT200 still has damn good AF, way better than R600 through RV770. In practice it is very hard to see the difference unless you run that tunnel program that is designed to show the difference.
nvidia does handle AF more efficiently, but if you compare the 5870 to the 4890 the hit is much worse. it's double the speed but the gtexels are the same as a 4890? so much for them telling us it didn't come with a performance hit. ATi has historically been known for doing graphics efficiently; this card completely forgets that. SSAA and the new AF are there to prove it. brute-force methods like those might be inefficient but the image quality is great. i am expecting fermi to do the same. the cards are so powerful it doesn't really matter.
http://images.bit-tech.net/content_i...ysis/texaf.png
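For context, the theoretical bilinear texel rates work out as below (a sketch using the public TMU counts and core clocks); if a 5870's measured AF rate really lands near a 4890's, the new filtering is eating most of the doubled throughput.

# Theoretical bilinear fillrate = TMUs * core clock.
cards = {"HD 4890": (40, 850), "HD 5870": (80, 850)}  # TMUs, MHz
for name, (tmus, mhz) in cards.items():
    print(name, tmus * mhz / 1000.0, "GTexels/s")  # 34.0 vs 68.0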
In case you didn't notice:
http://www.fudzilla.com/images/stori...ws/nvda300.jpg
Probably renamed G2xx chips, maybe with DX 10.1 support...
:rofl:
http://img62.imageshack.us/img62/613...f6d936eaf9.jpg
NVIDIA is scheduled to release an ExaScale machine architecture in 2017, according to Bill Dally, chief scientist of NVIDIA, speaking at the Global Road Show organized by the Institute of Process Engineering on Oct 27th. Introducing an architecture eight years out invites a bit of suspicion. However, Nvidia's ambition to enter the general-purpose computing market is clear.
According to Bill Dally, NVIDIA will put 2400 throughput cores and 16 CPUs on a single chip with a 300W TDP. It is widely expected that NVIDIA will make a CPU sooner or later, and this is the first obvious evidence of it on a definite roadmap. However, Huang stresses that NVIDIA will not make a CPU, while Bill Dally used Tegra as a shield when answering relevant questions.
This CPU + GPU chip from NVIDIA also confirms that they will pursue Fusion from the opposite direction to Intel and AMD.
Each throughput core includes three single-precision floating-point units and one double-precision floating-point unit; the chip is expected to provide a total of 40T of single-precision and 13T of double-precision floating-point processing power. A node using this chip would also have 128GB of memory, plus 512GB of phase-change memory or flash memory as high-speed local storage. This node architecture is very similar to that of a current supercomputer. Although the figures are amazing, the product can be realized as long as Moore's Law holds.
The whole system is made up of 128 cabinets, which can provide up to 2 exaflops of computing capability with a TDP of 10MW.
NVIDIA's future product appears to be powerful. However, the green giant won't give a detailed time frame for the prerequisites, such as exaflop computing, phase-change memory, and heterogeneous processors.
source: http://en.hardspell.com/doc/enshowcont.asp?id=7250
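For what it's worth, the per-chip numbers above hang together; a quick sketch (my own arithmetic, assuming each floating-point unit retires one fused multiply-add, i.e. two flops, per cycle) puts the implied clock near 2.7-2.8 GHz for both precisions:

cores, sp_units, dp_units = 2400, 3, 1         # per the Dally figures above
sp_tflops, dp_tflops = 40.0, 13.0
# clock (GHz) = target flops / (total units * 2 flops per cycle)
sp_clock = sp_tflops * 1e12 / (cores * sp_units * 2) / 1e9
dp_clock = dp_tflops * 1e12 / (cores * dp_units * 2) / 1e9
print(round(sp_clock, 2), round(dp_clock, 2))  # ~2.78 and ~2.71 GHz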
Fermi GF100 to launch by early December
If not in very late November
Fermi, Nvidia's GF100 40nm DirectX 11 chip, is selling great even though Nvidia has yet to officially launch it. Sources confirmed that Nvidia is taking pre-orders like there is no tomorrow, but at this time Nvidia offers no guarantees on when the chip will hit the market. Everyone expects shortages due to heavy demand from day one.
The original schedule of late November might slip into the first week(s) of December, but from what we learned over the last few days, it was always late November to the first days of December.
Nvidia has ordered many more 40nm wafers for its notebook and desktop entry-level chips as well as for Fermi, as Fermi should sell well in the server market for parallel computing, in workstations and, of course, as a computer games graphics card. The server market will be prioritized as Nvidia can make more money on the same chip.
Performance-wise, once again we can confirm that multiple sources strongly believe that a single-core Fermi will end up significantly faster than ATI's single-core Radeon 5870.
http://www.fudzilla.com/content/view/16185/1/
uh, where are these stupid cards (angryface)... at this rate i'll wait it out till the 6xxx / gt400 / gf200
Nvidia finally gets Fermi A2 taped out
7 weeks into a 2 week process
IT LOOKS LIKE Fermi A2 silicon has finally taped out, so the timetables are a little firmer once again. There is no chance of a real launch in 2009, making the chip a shining example of Nvidia's engineering mire.
Let's recap a bit. We said that Fermi, then called GT300, taped out on about Work Week 28 (WW28), and it did. We said that silicon was due back in 6-8 weeks, and cards could possibly be shown publicly on Oct 1. We admit that we overestimated Nvidia's ability to engineer its way out of a wet paper bag with a map, flashlight and a bunch of wood screws here.
It hasn't shown a card yet at all, yields are miserable, but it did in fact get silicon back on either WW35 or 36, which is right where we said it would be, almost to the day. The fact that yields were a joke, coupled with 'puppy' inflicted own goals, made things downright laughable for Dear Leader and company. Nvidia didn't have enough working dies to do the testing it needed, much less show some off for PR, so it faked that.
Back to the chips. Normally the debug and respin process is about two weeks or so, a marker that should have been passed before not-Nvision. As of mid-October, we heard that NV didn't know what the problem was, and that it was going down the metal stack to desperately try and figure it out. People inside honestly honest green said things were rather desperate.
The latest word was that the chip was set for a WW42 tapeout, or was imminent that week. Let's give Nvidia the benefit of the doubt that it did tape out, something anecdotally confirmed by Fudo saying that the chip will be out in early December. If by 'out' he means A2 silicon samples, then it will in fact be out in December. If you use a definition of 'out' grounded in the reality that humans occupy, then no chance.
Assuming that Nvidia parked a few wafers to speed up the next hot lot, it could indeed have a few A2 chips in late November, and boards to show a week or so later. The go or no-go decision could be made on December 1 if all goes perfectly.
From there, if the risk wafers did not need to be scrapped, you are about six weeks from production silicon, best case. Add another two weeks for boards, and you are into February. Given Dear Leader has scheduled a press conference at CES on January 6th, that should give you a good idea of the public timing. For those curious, although Nvidia seems to have forgotten to send us an invitation to their yawner, it will be at noon at the Venetian Hotel on January 6.
Anyway, if all goes perfectly, we are looking at February for the start of real quantities. There will be A2 silicon before that, but nothing in real quantities. Anyone who says otherwise has ulterior motives or doesn't understand how the industry works.
During not-Nvision/GDC, Nvidia was telling people who mattered and AIBs not to expect Fermi until March. Internally it was saying May, but the AIBs were not told that. About the A2 tapeout time, Nvidia's AIB messaging was changed to April or May.
It would be safe to read into this that the A2 stepping is not going to cut it, and an A3 spin is on the cards. Eight weeks added to early February gets you into March, so that lines up nicely with what Nvidia is telling people.
Another bit of anecdotal evidence is that there is no sign of the other four GT300 variants taping out. Those are usually kept in house until the first chip is fully baked, and the fixes are backported. If A2 would have done the trick, there would have been much more movement at TSMC on the variants, and there does not seem to be.
To wrap it all up, A2 is out, but it took about four times as long as it should have. A3 seems very likely, and the chance of anything more than a PR stunt launch in 2009 is zero. Don't believe the hype, Q1 is best case. When you don't have product, spin. S|A
http://www.semiaccurate.com/2009/11/...-a2-taped-out/
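The timeline in the piece is easy to replay; a toy sketch assuming the best case quoted above (go/no-go on December 1, six weeks to production silicon, two more weeks for boards):

from datetime import date, timedelta

go_no_go = date(2009, 12, 1)                # decision date from the article
production = go_no_go + timedelta(weeks=6)  # best-case production silicon
boards = production + timedelta(weeks=2)    # add board build time
print(production, boards)  # 2010-01-12 and 2010-01-26, so real quantities land in February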
fermi WILL have fixed-function tessellation.
it's a couple of questions down.
http://forums.nvidia.com/index.php?s...ic=109093&st=0
So Nvidia has been sitting on their a$$ for a long time.
No more news about Fermi???
From xbit:
"...the graphics chip designer is doing its best to have working samples of flagship graphics card based on the Fermi-G300 (NV60, GT300, etc) graphics processing unit (GPU) at the Consumer Electronics Show, which takes place in Las Vegas, Nevada, from the 7th till the 10th January, 2010. "
source : http://www.xbitlabs.com/news/video/d...w_Reports.html
Still completely up in the air, in my opinion.
Well I guess we find out the 7th...
Did anyone notice this?
EVGA and NVIDIA would like to invite you to a new graphics card launch at the NVIDIA campus in Santa Clara, CA. No it’s not based on the new Fermi architecture… we are still finalizing those details, so stay tuned. It’s a rocking new graphics card designed by NVIDIA and EVGA to take PhysX to the next level.
http://www.evga.com/articles/00512/
Does that mean that they are going to launch the GT300 soon? :shrug: