Either you're breaking NDA or you're just assuming... :shrug: Silly both ways.
The same reason Intel doesn't sell the i7 for $100: current market prices.
Even if it's going to be just as fast as the 5870 (which is around $450 atm), they will still charge extra for CUDA, PhysX and 3D, just because they can (and they already do with their other cards). And if it's going to be slower than the 5870 at launch - HAHA, Nvidia! :rofl: But I doubt that will happen.
Prices for CPUs have been going down steadily; only GPUs and SSDs have been stuck or inflating. The SSD thing is due to memory prices, but ATI has been stuck at $400+ due to no competition. Even if GF100 is $500, why shouldn't ATI bring prices down? Which in turn will make Nvidia bring their prices down too.
Are you seriously thinking that in the next 6 months all GPUs will continue to be priced this high?
OK, so they have stated that Fermi isn't as fast as a 5970. OK, the 5970 is two 5850s in Xfire. So at most you could put two 5970s in Xfire. However, one could theorize four Fermis in quad SLI, which could prove to be faster.
I know the downside would be that 4 cards would suck 1200W of juice, and the cost would be astronomical.
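The 1200W figure is just worst-case arithmetic; a quick back-of-the-envelope sketch (the ~300W per card is an assumption based on the PCIe power-delivery ceiling, not a published Fermi TDP):

```python
# Rough power estimate for a hypothetical quad-SLI Fermi setup.
# 300W per card is an assumed worst case (the PCIe spec ceiling:
# 75W slot + 75W 6-pin + 150W 8-pin); no official Fermi TDP exists yet.
CARD_DRAW_W = 300   # assumed per-card draw, watts
NUM_CARDS = 4       # quad SLI

gpu_total_w = CARD_DRAW_W * NUM_CARDS
print(f"GPUs alone: {gpu_total_w}W")  # 1200W, before CPU/board/drives
```

So even before counting the CPU, board and drives, you're shopping for a 1500W PSU.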
However I would install a Fermi over a 5870 or a 5970, due to driver issues.
I'm not a fanboy one way or the other...Love ATI cards, hate the drivers. Love Nvidia drivers, hate the card cost...
The BIG question is: how is the PhysX and volumetric particle simulation used in the demo Fermi-specific? i.e. can GF8/200 run this demo? Probably yes.
So far I've only seen Fermi running Unigine and this demo.
* Either everything else (50-100 games + all OS/applications) is running perfectly already.
* Or they're not so far along in testing/debugging, and these two are among the few that are "safe" to run (i.e. fast and error-free).
I own a 5870 and I've had 0 driver issues. Sure, there were some performance issues, but I am pretty sure Fermi will need quite a few driver updates improving performance till it works 'as intended'.
On the other hand, I had an 8800 before, and some of the later drivers were far from glorious in regard to game support.
Both companies have semi-good driver teams. However, both screw up from time to time.
Nope, the 5850 (Cypress Pro) has 1440 SPs. The 5970 has two 5870 (Cypress XT) GPUs with 1600 SPs each, but clocked @ 725MHz.
So it's faster than two 5850 in Xfire and slower than two 5870 in Xfire.
Obviously you can OC your 5970 to 5870 clocks and reach 5870 XFire performance.
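A rough sketch of that OC estimate, assuming performance scales linearly with core clock. It won't quite, because of memory bandwidth and CPU limits, so treat it as an upper bound (725MHz and 850MHz are the reference 5970 and 5870 core clocks):

```python
# Best-case gain from OCing a 5970 (725MHz) to 5870 clocks (850MHz),
# assuming perfectly linear scaling with core clock (an upper bound).
HD5970_CLOCK_MHZ = 725
HD5870_CLOCK_MHZ = 850

gain = HD5870_CLOCK_MHZ / HD5970_CLOCK_MHZ - 1
print(f"best-case gain per GPU: {gain:.1%}")  # about +17%
```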
* Or maybe you're reading too much into demos of pre-release hardware.
There's no law that says companies must show you hardware running 50 different games before launch to prove that it works. The only reason we got anything at all is because it's late; for G80 we got jack squat before launch, so I find the complaints quite strange.
I'm sure it will work with most games whether they show us proof or not; after all, that's pretty much what it's designed for.
BTW, Next week? oh you mean the 14th?
Yeah, the 14th is when my next newsletter should be coming too :)
I wonder if nVidia being late to the game means that developers have started to cater more for ATi's marchitecture than nVidia's marchitecture?
IMHO most games seem to run better on nVidia-based cards, not really through any fault of ATi hardware, but because of "The way it's meant to be played". You have to wait a while for ATi drivers to catch up with games. Hopefully the new generation of cards is just too powerful for this nonsense to affect us.
John
So on the 14th, NVIDIA will "launch" the new Tesla line with a big press release?
A few of my friends have picked up the 5870 and had to increase the idle clock to 400 to get rid of artifacting.
I'm not trying to argue one way or the other, I can only speak from personal experience and I find that NVidia drivers are more stable.
And I agree 100% that any driver development team has bumps on the road to glory.
http://newsimg.bbc.co.uk/media/image...britainjpg.jpg
That's today lol. It's hell enough.
:yepp: But the first exception I had was the HD5970. For the 600€ I paid ($680), I had so many problems with drivers and probably BIOS that I swapped it for 2x GTX 295... 2 weeks at home and already swapped. It was expensive and absolutely not mature. Even after the 9.12 drivers there are still some problems, and you have to tweak some settings manually to avoid lockups and crashes...
So what? ATI released the HD5970 with beta drivers, at a high-end price... In the watercooling forum you can see that some people are affected by issues that shouldn't happen on high-end products like this. It seems these cards were not really mature; driver or BIOS, I don't care. I was furious and will never make this stupid mistake again. Early adopting is not for me. I'm just worried about ATI selling a card still bugged and looking like a beta sample; customers are NOT beta testers if they pay for a product and a service.
So... I'll probably not do it again and for Fermi I'll wait for reviews from others!
:clap::yepp: For average use requirements, today's products are already oversized for today's games. We are only human; being able to see the difference between 123 and 135fps in a game... except on an OSD from RTSS, or in benchmarks... I'm not Superman.
So it depends on what people are looking for. But what you say about graphics cards is just as true of x86-based systems, for example. Without virtualization, a Nehalem-based server for standard purposes will be underutilized.
Finally maybe some games like Crysis will still be exceptions... but for the rest...
The most important thing for me is to have performance + stability. I prefer to have 10% less frame rate, but no driver issues, no temp issues, no BIOS issues, no issues because of immature new features (PowerPlay?).
In the coming months, what I'll be looking for is how reliable the solutions are. If Fermi offers less power at the same price as ATI, but has a better reputation for stability and maturity, I'll go for a Fermi. Or... keep my GTXs... it will depend on how performance increases and how games and videos benefit from it.
Yes, I gotta admit that the release of 5970 was terrible.
I still have no idea why they hurried to release the card so much without any proper drivers whatsoever.
Ditto, I like nVidia's drivers better for scaling, compatibility with older games, and general stability + color settings, as well as the control panel. ATI's cards are nice but the drivers usually give me issues. They're both good, but I personally prefer an nVidia card when available even if it costs a bit more due to this. (I own a 5870 right now and owned a GTX 280 prior).
Someone at B3D noticed that all the GF100 boards have 2 sets of 8P power running to the 8P connector. I've checked the EVGA 3Way SLI setup and it has the same for all three cards..
I say, fry me some eggs.
pics:
http://www.legitreviews.com/article/1175/1/
Seems to rhyme pretty well with the pcworld.fr story that they can't go higher without breaking 300W. To quote someone else, "GF100 is a dog..."
Have we discussed together before? I don't remember it, so hi, nice to meet you. I'm Cyril, and I'm happy to be here sharing information and thoughts with you.
And before accusing me of FUDing here, you should know 3 things:
- I'm working for one of the 4 major world IT manufacturers as an ITAC, and I HATE FUDing because it's the opposite of what I do in my daily business; it's absolutely not compatible with my ethics at work and in real life. I'm neither a sales guy nor a marketing guy. I'm here to share, have fun, learn and have objective and pragmatic discussions with others; I'm not here to fight or to FUD.
- Have you already had a look in the ATI forum and the watercooling forum? If not, here's a short refresher on the latest news; you'll see people, me included, trying to find workarounds... with this HD5970.
Here: http://www.xtremesystems.org/forums/...=241616&page=2
Here: http://www.xtremesystems.org/forums/...=239276&page=3
Here: http://www.xtremesystems.org/forums/...=240358&page=2
Here: http://forums.amd.com/game/messagevi...&enterthread=y
Another one on the EVGA forum.
- Last but not least: I thought I was editing my message, but instead I was posting a new one. So the reply with only a smiley was not intentional.
So please... don't teach me what FUD is. And moderate yourself when you talk with people you don't know. Yes, you're on a forum, but behind the keyboards there are real human beings. Keep some critical distance and try to stay objective with me and others. I don't know you, so I hope this is some kind of misunderstanding because of that lonesome smiley. Hope everything is crystal clear now about my intentions. If you look at previous posts in this thread, you'll see that I moderated my conclusions after the press news I gave about high temps. I'm not the kind of guy with a kid's maturity and a negative mindset.
;)
PS: and remember, I'm French and English isn't my first language. It's not always easy for me to be as precise in English as in French. ;) But I pay attention to details as far as I can to avoid situations where language may provoke misunderstandings. Why would I waste my time making the effort to write in English if it were just to troll? It would be easier to do it on a French forum, right?
any news on Fermi at CES 2010?
Surprisingly there are no threads on CES 2010... no one's going?
Charlie has said that the final card will only have 448 "cuda cores"?
The NDA lifts a few days after the 14th; today is the main GeForce 300 Deep Dive meeting. I can say this: GeForce 300 has higher performance than the GTX 295 with lower power consumption. Look at the GTX 295 vs HD5870/5970 in reviews and you can calculate the approximate performance of the 380.
http://tpucdn.com/reviews/ASUS/EAH59...es/perfrel.gif
Try techpowerup.com, for example. There the GTX 295 is about 14 percent under the HD 5970, and about 9 percent better than the HD 5870. If the GF 380 is about 10 percent better than the GTX 295, it will be 5 percent under the HD 5970 and outperform the HD 5870 by about 20 percent. Sounds realistic? I think so.
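That chain of percentages checks out arithmetically; here's the working, where the 14%/9% deltas are the quoted TechPowerUp figures and the 10% GF380-over-GTX295 number is purely the poster's guess, not a measured result:

```python
# Sanity check of the relative-performance chain in the post above.
# 14% and 9% are the quoted TechPowerUp deltas; the 10% figure for
# GF380 over GTX 295 is hypothetical.
HD5970 = 100.0
GTX295 = HD5970 * (1 - 0.14)   # GTX 295 ~14% under HD 5970
HD5870 = GTX295 / 1.09         # GTX 295 ~9% over HD 5870
GF380  = GTX295 * 1.10         # assumed: GF380 10% over GTX 295

print(f"GF380 vs HD5970: {GF380 / HD5970 - 1:+.1%}")  # about -5%
print(f"GF380 vs HD5870: {GF380 / HD5870 - 1:+.1%}")  # about +20%
```

Note how sensitive the conclusion is: the whole "-5% / +20%" result hinges entirely on that assumed 10%.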
What are the specs of the 380? Oh wait, Nvidia hasn't finished them, lol. And if power consumption is so low, why do they have 2 sets of 8-pins?
PS, taken from your previous post: "GeForce 380 has huge power consumption, but performance is only a few percent under the HD 5970 and far away from HD 5870 perf. Trust me, we'll see next week on many servers when the NDA for info lifts!" lol
The key here guys is perf/watt, perf/$ and OCability. It is a given that nvidia is preparing a monster to get the blue ribbon (perf crown) and it certainly should, since its coming many months later than ATIs latest gen.
From the start I speculated that they were aiming for about 50% more raw juice than Cypress (given the die size, money invested and big-die strategy), and it seems that they will miss that mark slightly, settling just short of the driver-immature 5970. From what we hear and see, the problems at TSMC may be the main culprit (both low yields and clock problems).
What we really need now is a good hard launch, with prices that at least correspond to perf/$ ratios, and a strategy flexible enough to cover more market segments (via salvage parts or other means). I think that nvidia is up to the challenge and, despite the many problems, will deliver soundly.
Having said that, it would be also interesting to see if and with what ATI will try to respond to that.
Happy times ahead. :)
Indeed it'll be interesting to see what ATI has up their sleeves for a counter, given that GT300 is coming out nearly half a year later, I'm sure ATI has been cooking up something...
Can't wait until the "deep dive" into Fermi begins. :yepp:
If true, beating the 295 is good performance wise albeit late to the market.
This means that they will need a dual GT300 for the performance crown, though. But a completely new architecture that can already reach for the 5970 at launch would be sweet. Drivers should come a long way after launch; then perhaps the gap closes.
IF it indeed has 448 CUDA cores when it should have had 512, that would be a real shame. Hopefully the refresh will be better. But I imagine the next node is far away.
Now we just need rumours on power consumption and price. Let's hope it's not 300W!
PS: they had better get some midrange and low-end cards out pretty quickly.
I never know when to take your posts serious. :p:
http://farm3.static.flickr.com/2738/...f3f3141e_o.jpg
After yesterday's pics of the 3 PEG connectors, nvidia now wisely decided to obscure it from view.
Same here Scaniris
I have had to RMA 3 GTX 280 cards :(
My first one lasted ~7 months before the screen started to get pink dots.
The second one (an RMA replacement) was DOA.
The third one lasted ~9 months and then started to run extremely hot (loud fan too) and develop flashing black rectangles in DirectX 10 games such as Clear Sky, Crysis and (yes, it's not a game) 3DMark Vantage.
I have a 285 on the way, but it is delayed because of the snow :(
Anyway back on topic I agree that stability is key and performance too, but stability more so.
The ATi HD 5970 is not out yet in the UK; perhaps when it is released here the drivers will be much better. The 5870 series stocks are very low here, even though they have been out since December.
nVidia really do need to hard launch, and launch well, as ATi's launch has been woeful here.
Charlie is semi-accurate. The Tesla Fermi does indeed have 448 cores; however, Quadro variants have a 448-core 3GB version coming EARLY Q2 2010 and a 512-pipe 6GB version coming sometime Q3 2010.
I would assume the same could be said for desktop cards (except perhaps with the memory halved), and then perhaps a sneaky refresh of the 448 part @ 480 pipes (rather like the GTX 260-216).
As for power consumption, it would not surprise me if the test/beta cards have 2x 8-pin but the retail cards are 2x 6-pin, or 1x 8-pin and 1x 6-pin.
Remember that this product is still not fit for human consumption; nVidia have 6 weeks or so of tweaking to go.
John
That looks more like a camera blur, also it was only 2 connectors.
That much was very easy to see.
That's because both the 6- and 8-pin connectors are on the same cable.
It's one of those odd PSUs where, instead of having a set of cables for each connector, they have them on the same one, splitting near the end.
It's basically for multi-rail PSUs, where they run one PCI-E cable per rail, meaning each card gets its own rail.
That's pretty horrible luck with your 280s; my 280, which I bought a few weeks after release, is still going strong overclocked. I guess it's probably around 18-19 months old now. It's always run extremely cool with the fan on auto mode, so the duty cycle is up and down all over the place.
All the people giving Fermi performance numbers or estimates, have the clocks been finalized anyway? If so, what are they?
Think about it... why would nvidia finalize them already?
It would be stupid... they are definitely still tweaking the design...
They won't settle on final clocks until a very few weeks before launch... and that's still months away...
I heard several rumors, though, that range from 500-750MHz core and shader clocks of 1300-1600.
I think the lower numbers were A1 and A2, which had clock problems; I think the final clocks will be about the same as the 280's...
More like GTX285. It really depends on what they are able to do with their bins.
I still have doubts that they will have a fully enabled 700MHz part, unless it is a reviewer edition for the first quarter or so.
Edit- Short answer to your question, no clocks are not finalized though they have a good idea where they will land.
Well, the 280 was 600 and the 285 was 650 ref clocks...
I think they will be closer to 600 than 650...
Fully enabled? I thought even their server parts won't have all units enabled?
And they seem to be heat-limited... a lot... Unless they have perf issues and can't beat an overclocked 5870, which would surprise me a lot, I don't see why they would go to great lengths to clock their cards high and run into a lot of heat and power complications... not really worth it, I think...
Am I the only one thinking right now that Fermi will be a massive flop?
I mean, the 5xxx cards are selling like crazy, nvidia cards not so much. Why no official performance numbers from nvidia? It would only hurt AMD. The card is up and running despite the crashes, so why no numbers? The only reason I can think of is that it would make the few people still waiting for Fermi stop waiting and go buy an ATI card.
Since this is a public forum, it makes no difference whether you've discussed with me before or not. Anyhow, nice to meet you, Cyril ;)
Congrats on your job! I'm glad we agree on FUDing subjects! Thanks for providing the links ;)Quote:
And before accusing me of FUDing here, you should know 3 things:
- I'm working for one of the 4 major world IT manufacturers as an ITAC, and I HATE FUDing because it's the opposite of what I do in my daily business; it's absolutely not compatible with my ethics at work and in real life. I'm neither a sales guy nor a marketing guy. I'm here to share, have fun, learn and have objective and pragmatic discussions with others; I'm not here to fight or to FUD.
I see that you've spread the word of your problems around the Net and forum. Did you actually try to RMA the card itself? 'Cos you know that any piece of hardware is prone to malfunctioning?!Quote:
- Have you already had a look in the ATI forum and the watercooling forum? If not, here's a short refresher on the latest news; you'll see people, me included, trying to find workarounds... with this HD5970.
Here: http://www.xtremesystems.org/forums/...=241616&page=2
Here: http://www.xtremesystems.org/forums/...=239276&page=3
Here: http://www.xtremesystems.org/forums/...=240358&page=2
Here: http://forums.amd.com/game/messagevi...&enterthread=y
Another one on the EVGA forum.
:yepp:Quote:
- Last but not least: I thought I was editing my message, but instead I was posting a new one. So the reply with only a smiley was not intentional.
You really need to reconsider your emotional approach to forum debates. It's really nothing personal. I'm sure that you're a great French guy! If you read my post again, you'll see that I only asked you for the links, 'cos without them writing those kinds of posts does look like FUDing, as I'm sure you're aware! ;)Quote:
So please... don't teach me what FUD is. And moderate yourself when you talk with people you don't know. Yes, you're on a forum, but behind the keyboards there are real human beings.
Of course I don't know you, just as you don't know me! That's exactly why I didn't write to you in any personal manner! Please read my post again and I'm sure you'll see that.Quote:
Keep some critical distance and try to stay objective with me and others. I don't know you, so I hope this is some kind of misunderstanding because of that lonesome smiley.
I never insinuated anything about your intentions. I simply asked for the source links so that your post has appropriate backing, and not something I'm sure you didn't intend it to have - FUD. ;)Quote:
Hope everything is crystal clear now about my intentions. If you look at previous posts in this thread, you'll see that I moderated my conclusions after the press news I gave about high temps. I'm not the kind of guy with a kid's maturity and a negative mindset.
;)
Any forum member, regardless of origin, is my fellow forumer ;)Quote:
PS: and remember, I'm French and English isn't my first language. It's not always easy for me to be as precise in English as in French. ;) But I pay attention to details as far as I can to avoid situations where language may provoke misunderstandings. Why would I waste my time making the effort to write in English if it were just to troll? It would be easier to do it on a French forum, right?
:welcome:
P.S
please consider RMA-ing your card :up:
I've had pretty bad luck with my GTX260. Two RMA's so far with EVGA. I am only looking forward to Fermi so that there's an ATI pricecut. Sad that Nvidia's problems are stopping me from buying an ATI card.
Are you sure it's not your mainboard? IMHO you should test it in a friend's computer as well. The chances of getting that much hardware DOA are small, don't you think? I've only ever had one piece of hardware arrive dead.
Exactly what I'm thinking. Would it be that hard to show a few numbers, even on early silicon? I mean, they could say "it's early hardware so it can improve" and keep ppl from buying HD-5k.
The sooner they reveal the performance the easier it will be for AMD to know exactly how much performance they need to aim for their refresh, that's why you don't usually want to reveal anything until 1-2 months before, that's not enough time for the competitor to make any rough changes but enough time to create some hype before launch.
And what if they showed numbers now, but the figures were worse than expected?
Why would the cards need to be finished to run Unigine? No engineering samples or test boards? Was it running with final clocks and shader count? C'mon, man...
We all know how news is disseminated. Most of the people waiting to make a decision are not hawking forums for the latest; they are just waiting for any numbers before they pull the trigger. I am certain that nvidia wants them to see the final results (for better or worse). Beyond that, a certain competitor of theirs has been waging quite an aggressive PR campaign with the express intent to sully nvidia's name (not that they can't do that themselves). I'm sure they want to release some performance data, but they don't need to give their rivals or critics any more ammo. The NDA comes off next week; after waiting this long, a few days more won't kill us. ;)
only if they suck... if they are great it will make people camp on their cash :D
Wasn't that the whole point of nvidia's PR hype late last year at GTC?
"Don't buy ATI's 5xxx cards, Fermi is coming soon, and look how great it performs and what you can do with it!"
Now that they COULD create some REAL hype with actual numbers and performance... they only have a very small show at CES and don't talk about perf at all... hmmmmmmmmm, why could that be? :D
I'm not saying Fermi perf sucks, but it's clearly not as amazing and way faster than ATI's cards as some people think, and as nvidia claimed... otherwise they would definitely show off some numbers, or at least make some bold claims like "up to 50% higher perf" and "we will blow ATI out of the sky" etc. etc...
Good point; an 8800GTS (G92) and a 7800GTX run fine on the board :(
It is a dead card as the fan spins, and I get the BIOS beep codes for "no VGA detected".
I have tested using both PCIe x16 2.0 slots and DVi connections.
It's technically only my second DOA, as the other 2 cards became defective over time (the 1st was the notorious pink dots problem, the other developed flashing black rectangles and the Black Screen of Death issue).
In between those 2 was a DOA which POSTed, but died after 10 mins.
If you ask me, BFG should have been stricter with the binning on their GT200 cards; I cannot comment on GT200b cards, as this one was DOA and would not even POST.
I guess these are the issues you get with huge dies and cores, meaning that the slightest issue can manifest itself over time?
Fingers crossed Fermi does not go through this... but I fear with the high power consumption it may, as more power = more heat = damage.
I am losing faith in BFG, and dare I say it nVidia, after the issues I have had with the GTX 280 and 285. If only the grass truly was greener on the ATi side...
John
My GTX 285 is dying as well :/
It's artifacting, and the heatsink barely keeps the card running under stress (100°C at 100% fan speed, anyone?)
It's not doing everything I want either, so it's wait for Fermi or get a 5970 now. Since neither of those are possible atm, I'll wait for Fermi.
You shouldn't be having worse temps over time as long as you keep the dust out. The card shouldn't be producing more heat, and the heatsink and fan are not changing, so in theory the thermal properties are identical to day one. You might want to reseat the heatsink and take a vacuum to it.
The same thing came to my mind; after all those issues I would have started to think about the mobo, PSU, some kind of bad contact, etc...
Well, Fermi is not primarily a gaming GPU, so gaming numbers will be OK, but the real money is in GPGPU, where it is 2.5x faster than GT200 (shown at GTC last year, maybe with A1 silicon and low clocks). Asking a 4x4 to show how fast it is on a highway is not how you'd want to show it off, even though that's where it will be used most.
PSU? Maybe a short circuit somewhere? You need to give a lot more details to try to solve the issue, or buy an ATI card and see if it works OK; that could rule out "other" components.
C'mon, Fermi is finished, it's done! What's an unfinished card? Is K10 still unfinished? AMD just rolled out a B3 revision with a better thermals/clock ratio; is K10 still unfinished?
And what kind of excuse is that? Are you telling me that, with the clocks and shader count of those cards, the nvidia engineers that made Fermi can't do the math and come up with performance numbers for the clocks and shaders they still expect to reach? Nvidia didn't give numbers already because they didn't want to; I can only speculate about why.
Didn't know that, it makes that demo look less dumb, and this discussion dumber.
Explain please, I like to see you try and fail.
Yes AMD is waiting for fermi benchmarks to figure out what to do next... and the same goes for Nvidia, they were waiting for R800 benchmarks to know what to do with Fermi.
Precisely.
Ok ok I give in. I will tell all the secrets.
Fermi will be faster than GTX280.
198fps in UT3
83fps in B:AA
42fps in Crysis
57fps in Far Cry2
It scores 14 250 marks
There, are you all happy now?
Fermi will be faster (much faster) than the GeForce 256.
why would you want me to fail?
First off, the actual products are way too far into design to change; the only things that could change are clocks and specs. Fermi is not influenced by R800. If ATi knows GF100 performance, launch date etc., they can create a new marketing campaign and product lineup, change price points today, and have a competitive advantage. They already launched their card first, which is a big advantage. The only thing left, from a business perspective, is to wait for nvidia.
You've got a point there, but this is no easy and fast business where you can pull something out of your sleeve in two months' time, amIrite?
But in the end, all we can do is hope NVIDIA knows what they're doing and that they are not too arrogant to overstate Fermi.
Don't get me wrong. I have no doubt about your knowledge or your skill, but if there's one thing I've learned about computers, it's that you can't explain everything.
I've currently got a problem myself. I built an i5 rig with an MSI P55-GD65, G.Skill Ripjaws DDR3-1600 CL7, an Enermax Liberty and an HD5850. It began with the RAM being DOA (the one piece of hardware I was talking about earlier), but everything went smoothly for a few weeks after the new RAM arrived. Then my HDD began developing faulty sectors, so I replaced it, and after that the system would unpredictably freeze, regardless of the load or what I was doing at the time. I'm still chewing on that one; I already replaced the PSU and next up is the mainboard. You just can't be sure that a computer behaves rationally; it's far too complex for that.
A quick question guys...
http://i44.photobucket.com/albums/f9...1011-26-40.png
Why are there 3 cables going to the card? Is it a 3x6pin configuration, or just a 8+6pin with the 8 in two cables?
8+6-pin, but some of the cables have double wires, as they are also used for other 4-pin connectors; probably something to do with the Thermaltake PSU.
http://img109.imageshack.us/img109/6...0204028957.jpg
That's what I was thinking too (6+2 plus 6 setup), thanks... :up:
Thanks FischOderAal
That is something for me to think about, as my RAM was RMA'd last year due to developing faults (the replacement RAM worked flawlessly at a much lower voltage).
The GTX 285 OCX is dead in another system too, with exactly the same symptoms (one long beep, three short beeps). Ah well, back to RMA it goes :(
I think the question everybody here should be asking is whether Fermi is going to be another "FX", or whether it's truly going to be faster than the :banana::banana::banana::banana::banana:in Fast 2000?!
http://media.techeblog.com/images/_funnygeek_1.jpg
John
When does the "Deep dive into Fermi" start?
In how many hours??
You and I both know there is a huge difference between a revision update and a product rollout.
What I am telling you is that (last I heard) the nvidia engineers haven't settled on shader counts or clock speeds, so no, they can't tell us its theoretical performance. Not that driver maturity has ANYTHING to do with GPU performance either, and who knows where they are with driver development. Also, even though the NDA comes off next week, we don't know what parts of the product lineup they will be showing, or even how much information for those products will be available. They could just show the card and tell us its clock speed, VRAM size and type, and GPGPU performance. nvidia has been keeping the fine details close to its chest, probably because it's coming down to the wire. I'm hoping for a surprise next week with a full lineup of GeForce cards... but that seems highly unlikely.
I disagree, that's AMD's style of marketing, NV likes to compare their products to their previous generation. It's much like Apple forever comparing themselves to MS and MS comparing themselves to their older product. It's bad marketing bringing good or bad attention to your rival, making as if they don't exist is the ultimate in arrogance which works.
I read on anandtech that at minimum the GF100 will be 20% faster than the 5870; searching for the link now, but the damn anandtech site is not opening... It was in the CES section.
I think this is what you are talking about, from anandtech:
"The demo also used NVIDIA's stereoscopic 3D technology - 3D Vision. We're hearing that the rumors of a March release are accurate, but despite the delay Fermi is supposed to be very competitive (at least 20% faster than 5870?). The GeForce GTX 265 and 275 will stick around for the first half of the year as Fermi isn't expected to reach such low price/high volume at the start of its life."
Even AMD has already calculated theoretical performance for Fermi just based on its specs; they already made a comparison chart between R800 and Fermi. I'm not asking for theoretical performance, anyone can do that; I was asking for concrete numbers, and if the nvidia engineers can't do the math by now, based on the results of the running cards, then Nvidia, you're done, your engineers are morons.
Good point on the drivers though. Like I said before, I didn't know the NDA lifts next week; I was just wondering about another demo of running cards without numbers. Let's just wait then.