You honestly believe that 5x90 cards will be slower than GTX285 SLI and 4890 CF? Do you know something the rest of us don't?
No, I don't. 10%-20% is probably a more reasonable expectation.
I think you have a very optimistic view on HD5870 performance. 10-20% isn't enough to catch GTX 285 SLI.
http://img69.imageshack.us/img69/6812/performance.png
I think ATI is more worried about beating out the 295 with their single chip refresh, not SLI 285s
Why would AMD be concerned with an EOL part?
No, I mean that by the time this fabled refresh hits, Fermi will be on the market and the GTX 295 will be long gone. I'm waiting to hear what this refresh will be based on, though, given that we're stuck on 40nm for all of 2010. It could be a new architecture, but it remains to be seen whether it'll be more efficient than Cypress on the same node.
Well yeah, by the time the 5870 refresh hits, the 295 will definitely be out of production. I wonder what happened to the ASUS Mars 295s; did one person buy like half of them, or did they all melt and couldn't be RMA'd due to the limited production run?
If we're stuck on 40nm for 2010, that means there isn't going to be a 5990 like previously hinted.
It's enough to catch a 295 that everyone keeps mentioning in this thread. GTX285 SLI is 59x0 territory.
GTX285 is EOL too.
It would be on the same node with higher clocks and some arch tweaks - just like 4890.
According to this graph and your first quote, 20% is NOT enough to catch "GTX285 SLI and 4890 CF"
What I (and I think everyone else here) understand you to be trying to say is:
5890 =? GTX295
5990? =? GTX285 SLI =? Fermi?
That's right - according to that graph.
Crank it up to 2560x1600 and/or add some more AA plus rerun it on the drivers that are out at Fermi's release and I think you'd see a different picture.
That's not what I'm trying to say. In fact, I'm not sure I'm even interpreting this mess of words correctly.
Quote:
You're just contradicting yourself. Up above you state the 5x90 will compete with GTX285 SLI, and there won't be a 5990 or a 5790 (that competes with GTX285 SLI); that leaves the 5890, which will NOT compete with it, as you previously quoted.
You're just contradicting yourself.
Roughly.
Quote:
What I (and I think everyone else here) understand you to be trying to say is:
5890 =? GTX295
5990? =? GTX285 SLI =? Fermi?
5990?
If they already hit the PCI-e 2.0 300W limit with the 5970 and had to lower both memory and core frequencies to stay below it, a 5990 is very unlikely to happen (unless they use 32nm for the 5890, which is even more improbable :D).
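For what it's worth, the arithmetic behind that 300W ceiling looks like this (a rough sketch; the connector limits are the standard PCI-e ones and the ~188W figure is the published 5870 board power, so the exact numbers are only approximate):
Code:
# Board power budget for a PCI-e card (standard connector limits).
SLOT_W = 75        # power from the PCI-e x16 slot itself
SIX_PIN_W = 75     # one 6-pin PEG connector
EIGHT_PIN_W = 150  # one 8-pin PEG connector

budget = SLOT_W + SIX_PIN_W + EIGHT_PIN_W
print(budget)          # 300 W -- the ceiling the 5970 has to fit under

# Two full-clock Cypress GPUs at roughly the 5870's ~188 W board power each
# would blow straight past it, hence the lowered clocks on the 5970.
print(2 * 188)         # 376 W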
Didn't you know about the unwritten rule that you have to predict the most favorable outcome for AMD and the worst possible outcome for Nvidia? :p: The 32nm 5990 will be with us right quick!
I agree that it is becoming increasingly hard for both ATI and Nvidia to keep their dual GPU solutions within 300W. That doesn't necessarily mean that they won't make a >300W 5990 if NV threatens their market position with Fermi.
32nm is too far off.
Instead of responding with sarcasm you could simply propose and argue for a more likely scenario.
32nm will not happen. 28 nm will be H2/2010. Probably Q4. One year away.
Quote:
IT LOOKS LIKE we were right about Fermi being too big, too hot, and too late, Nvidia just castrated it to 448SPs. Even at that, it is a 225 Watt part, slipping into the future.
Update: Nvidia has contacted us and declined to respond.
The main point is from an Nvidia PDF first found here. On page 6, there are some interesting specs, 448 stream processors (SPs), not 512, 1.40GHz, slower than the G200's 1.476GHz, and the big 6GB GDDR5 variant is delayed until 2H 2010. To be charitable, the last one isn't Nvidia's fault, it needs 64x32GDDR5 to make it work, and that isn't coming until 2H 2010 now.
http://www.semiaccurate.com/2009/12/...-fermi-448sps/
eee if this is true....it does not bode well.
:shocked:
I thought that site was going down? It spews utter BS on a daily basis.
What would be the point of that? Didn't you just dismiss the hard numbers posted a little earlier and go on to predict great driver improvements for Cypress and crap drivers for Fermi? Seems you have it all sorted already.
In any case, more bad news out of TSMC for Nvidia - not enough A3s are hitting the speeds needed for the C2070, i.e. 1400MHz.
http://forum.beyond3d.com/showpost.p...postcount=2442
What would be the point of discussing it instead of sarcastically beating up strawmen? To explore the topic from a different point of view perhaps? Or do you expect me to play devil's advocate against my own perspective as well?
I dismissed those numbers and I gave my reason why. I can elucidate further if necessary. I'm not even sure why you brought up GTX285 SLI, everyone else was talking about GTX295. And both of those cards are EOL as well as 4890.
You want my optimistic perspective? The most (realistically) optimistic performance figures I have heard for Fermi are 40% faster than Evergreen. Let's say that ATI doesn't refresh the 5xxx series either, so they have to compete with what they have. If that happens performance would probably, IMO, look something like: 5850 < 5870 ~= GTX360 < GTX380 < 5970 < GTX390
That's basically a redux of last generation. As such it would probably mean some more market share erosion and narrow margins for Nvidia, just like last gen. That's my optimistic view. If you wanted me to say NV would be king of the universe and there would be global peace I'm sorry, I just don't think it's going to happen.
Actually it's spot on: http://www.nvidia.com/docs/IO/43395/...83-001_v01.pdf Unless you doubt the source...
Except, if Fermi is indeed 40% faster than Cypress (which I don't believe is possible), the GTX 360 won't be equal to the 5870 but a lot faster than it, and the GTX 380 will be extremely close to the 5970. So with price points comparable to last gen (as in the difference between the GTX 275/285 and the HD 4800s), Nvidia will be in an über position.
It seems Fermi's GPGPU focus is the product of Nvidia chasing their Larrabee nightmares.
I hope they didn't get so carried away that they forgot the gaming part.
If you look at 40% faster than Evergreen with the 9.11 drivers, then yeah, the GTX380 would be pretty close to the 5970. But the 9.12 drivers are already much better for CF. The GTX380 would be closer to the 5970 than the 5870, but it would still be in between. As for the GTX360 beating the 5870 by a great margin, I disagree. Not if Fermi is only 40% faster than Evergreen.
This is a bit OT, but since there is so much discussion I thought I would add my speculation. If the rumor of ATI switching to GF for GPU production is true and the new series is delayed, they will release a card on the same 40nm node, just like they did going from the 3870 (55nm) to the 4870 (55nm): just a bigger chip once the process is butter smooth and yields are plentiful. After all, NV will be making a 500mm2 chip for a while and TSMC will have some experience. So add, say, another 480 SPs (800 would be nice :D) and some 6Gbps GDDR5, and you've got a nice new card.
On topic: the new cards from Nvidia are nice, but I keep wondering how many cards they can make with one chip. Three, tops. If they are to keep the 260 (if it's still under the same name, lol) positioned against the 5770, they are going to lose a lot of the mainstream market; maybe a bit less performance, but overall it's the better card.
Everything aside, are you guys checking the Nvidia Facebook page? 32xAA :S Another good way to kill FPS without making anything look better!
NVIDIA GeForce GF100 Fermi Video Card Facts
GF100 is the codename for the first GeForce GPU based on the Fermi architecture!
The GF100 board is 10.5-inches long -- the same length as GeForce GTX 200 Series graphics cards!
GF100 packs in over 3B (billion!) transistors
The GF100 supports full hardware decode on the GPU for 3D Blu-Ray
GF100 graphics cards will provide hardware support for GPU overvoltaging for extreme overclocking!
GF100 supports a brand new 32x anti-aliasing mode for ultra high-quality gaming!
225W is for Tesla, with 3/6GB of memory. It should be lower for GeForce. Plus, by your line of thinking, no one should overclock a GTX 295 or HD5970.
How much less? I doubt they'll release with less than 1GB, so we're looking at 1.5GB on a single-chip card that will probably still chuck out 200+ watts when it's really loaded.
Maybe it won't be that bad; it's possible that number came from before the switch to 448 shaders (which they might get back now that 40nm yields are better). But I personally wouldn't overclock a dual-GPU card unless I was benchmarking. Too many hot parts in too small an area for my liking.
Unless it can manage that magic 40% faster than a 5870, nVidia will be in trouble as they can't make a dual GPU with two of those chips.
Yeah, I don't doubt that the GTX 380 will have a TDP higher than 200W. But the 5870's TDP is very close to 200W too, and you can bet the 5890 will be above 200W when it's released.
Still don't get how a GTX 395 is going to be made, though.
What strawmen? You're the one making sweeping assumptions. I just chose not to counter them with my own unfounded predictions. You never explained why you think there are big driver improvements to be had on Cypress even though the architecture barely changed.
That's because I didn't. Mario did and you did as well.
Quote:
I'm not even sure why you brought up GTX285 SLI, everyone else was talking about GTX295.
Except the GTX280 wasn't 40% faster than the HD4870 and was quite a bit slower than the 4870X2.
Quote:
If that happens performance would probably, IMO, look something like: 5850 < 5870 ~= GTX360 < GTX380 < 5970 < GTX390
That's basically a redux of last generation.
I don't understand your reasoning. 32xAA as a feature is pretty cool, even if it's not usable in Crysis; older games will fly. 32xAA is still not perfectly realistic AA, and anything and everything can be improved in my opinion. I welcome new anti-aliasing modes. :)
DX11 is nice, but it has a reasonably big effect on fps in Dirt2. Same story I guess, it just means more eyecandy, worse performance.
I'm sure Fermi will be worth the wait.
OK people, Quadro NEWS. There will be 2 SKUs at launch (early Q2 2010) and then a mega-expensive, donate-your-organs-and-house SKU in Q3 2010. It would not surprise me if we see CUDA cores "cut" in groups of 32, for example 448, 480 and 512, although this is speculation; the only confirmation is that there will be 2 Quadro FX series GPUs in Q2 2010 and "the big daddy" in Q3 2010.
If (and yes, this is a big if) the same is reflected at the consumer level, my guess would be:
360 = 448 pipes 1.5GB of RAM
380 GTX = 480 pipes 3GB of RAM
380 ULTRA = 512 pipes and 6GB of RAM, plus an expensive electricity bill and a large carbon footprint.
Pure speculation at this stage... all eyes on the 7th of January, I guess, eh?
John
I'd rather use 8x SSAA with old games than 32x CSAA. I don't think that the 32x mode will be MSAA, and CSAA is totally lame compared to MSAA... :down:
6GB of RAM costs a lot, and I mean a lot. If this is a consumer card, Nvidia may have to sell it at a loss. Also, I believe 6GB of video RAM for games is more than overkill; it's mega overkill. Hell, I'd rather have 6GB of RAM on an i9 than on a Fermi.
XbitLab claims that the GF100 will have 512 stream processors; I wonder where they're getting the info.
Quote:
The flagship Nvidia Fermi “GF100” graphics processor will feature 512 stream processing engines (which are organized as 16 streaming multi-processors with 32 cores in each) that support a type of multi-threading technology to maximize utilization of cores and deliver extreme computing power. Each stream processor has a fully pipelined integer arithmetic logic unit (ALU) and floating point unit (FPU). The top-of-the-range chip contains 3 billion transistors, features a 384-bit GDDR5 memory controller with ECC, and features a rather unprecedented (for GPUs) 768KB unified level-two cache as well as a rather complex cache hierarchy in general. Naturally, the Fermi family is compatible with the DirectX 11, OpenGL 3.x and OpenCL 1.x application programming interfaces (APIs). The new chips will be made using 40nm process technology at TSMC.
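Putting those numbers together (my own back-of-the-envelope arithmetic, not from the article; the 2 flops per core per clock FMA assumption is just the usual way these peak figures are quoted):
Code:
# Sanity check of the quoted GF100 specs.
sm_count = 16
cores_per_sm = 32
shader_clock_ghz = 1.40              # the clock mentioned for the full part

cores = sm_count * cores_per_sm
print(cores)                         # 512 stream processors

# Assuming one single-precision FMA (2 flops) per core per shader clock:
peak_sp_gflops = cores * 2 * shader_clock_ghz
print(peak_sp_gflops)                # ~1434 GFLOPS theoretical peak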
WTH, what consumer card needs 6GB? I doubt that it will have 6GB. 2GB max, IMHO.
Well, I really don't understand this whole RAM speculation going on right now, as I thought we got that information a while ago. If I remember correctly, 1.5GB was going to be the standard amount of RAM on the top consumer card, with the possibility of a 3GB part in the future, kind of like how the ATI 5000 series has 1GB standard with a possible 2GB part.
1.5-2GB of VRAM is the only capacity that makes sense IMO for non-Quadro cards. Personally, I think 1.8GB and 1.5GB respectively for the GTX 380 and 360 would be the best compromise (cost, size, a good balance between what's actually usable now and a bit of headroom for future games) if it's technically feasible, which I can't be bothered to check. I hope some numbers show up soon; these speculations and rumors are really boring me to death now, lol.
IMO GeForce will have 1536MB (128MB x 12) and, for the 3072MB version, 256MB x 12.
If you want 2GB of RAM you need to use 256MB x 8, so that means 4 slots are empty. Companies generally try to fill as many slots as they can, since it's cheaper than using higher-capacity memory chips.
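A quick sketch of that chip-count math, in case it helps (the 384-bit bus and the 128MB/256MB GDDR5 chip densities are the assumptions here):
Code:
# A 384-bit bus is 12 x 32-bit channels, i.e. one GDDR5 chip slot per channel.
channels = 384 // 32                              # 12
for chip_mb in (128, 256):
    print(chip_mb, "MB chips x", channels, "=", chip_mb * channels, "MB")
# 128 MB chips x 12 = 1536 MB
# 256 MB chips x 12 = 3072 MB

# A 2048 MB card would need 256 MB chips on only 8 of the 12 channels,
# leaving 4 slots empty and effectively a 256-bit path to memory.
print(256 * 8, "MB over", 8 * 32, "bits")         # 2048 MB over 256 bits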
32nm is a strawman. I said nothing of the sort. As for my assumptions, I wouldn't say they are unfounded. They are founded on the little bits of information that we have at the moment. Certainly nothing strong like "I have an NDA", but it's still fun to speculate. You don't have to participate if you don't want to.
As for driver improvements, it's just a suspicion of mine. That a chip with the same number of shaders, higher clocks, and higher bandwidth would consistently score lower than a 4870x2 (much less 4890CF) even though it doesn't have to interface through CF doesn't seem right. You keep mentioning a barely changed architecture - then where does the performance discrepancy come from?
A quick review shows that Marios did mention it, but nobody picked up on it. I certainly didn't mention GTX285 SLI until after you did. Double the speed of GTX285 =/= GTX285 SLI. All the other discussion revolved around GTX295.
Quote:
That's because I didn't. Mario did and you did as well.
Like I said, that's my optimistic (for NV) view. If you don't like it I could present my doom and gloom view.
Quote:
Except the GTX280 wasn't 40% faster than the HD4870 and was quite a bit slower than the 4870X2.
Personally, I think you guys are the ones being more pessimistic (or sandbagging). I think Fermi will be significantly faster than the GTX295. If it isn't, it'd be a disaster, IMO.
The 4870X2 can set up 2 triangles per clock; the 5870 can set up 1 triangle per clock. You can see in the synthetics that geometry shader performance is the same.
http://img706.imageshack.us/img706/2298/gsrv870.png
http://img10.imageshack.us/img10/2298/gsrv870.png
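Rough numbers for what one setup unit versus two works out to (a sketch using the reference core clocks; the X2 figure obviously assumes ideal AFR scaling, so treat it as an upper bound):
Code:
# Peak triangle setup rate = setup units x core clock.
cards = {
    "HD 4870 X2": (2, 0.750),   # two GPUs, one setup unit each, 750 MHz
    "HD 5870":    (1, 0.850),   # one setup unit, 850 MHz
}
for name, (setup_units, clock_ghz) in cards.items():
    print(name, setup_units * clock_ghz, "billion triangles/s peak")
# HD 4870 X2 -> 1.5 Gtri/s (ideal case), HD 5870 -> 0.85 Gtri/s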
Does that account for the majority of the performance difference?
I hope we never need 6GB for any games, ever, lol.
except he might have poor internet
I know I'm stuck with 1 megabit for at least the next 7 years, so I won't be able to download any of the new games without it taking a week (bandwidth caps too), and they won't sell games that large on 500 DVDs because it's not "cool".
Never said you did. I was referring to your comments on driver improvements.
The 5870 does not have more bandwidth than the 4870X2. It's 153GB/s vs 230GB/s on the latter.
Quote:
That a chip with the same number of shaders, higher clocks, and higher bandwidth would consistently score lower than a 4870x2 (much less 4890CF) even though it doesn't have to interface through CF doesn't seem right. You keep mentioning a barely changed architecture - then where does the performance discrepancy come from?
Your optimistic view for Fermi is that its launch drivers would be crap and Cypress drivers would have improved tangibly by then? Hmmm, interesting take on optimism :)
Quote:
Like I said, that's my optimistic (for NV) view. If you don't like it I could present my doom and gloom view.
I hope we do, textures are still pretty low resolution. But memory isn't the only limiting factor. Art is very expensive to create and it's not going to get any cheaper in the future.
The extra memory might be for more than just gaming. As nvidia (and ATi for that matter) try to take on extra responsibility for what they can render within the OS, that extra memory might come in handy. Future internet browsers are supposed to be GPU accelerated, and we already have GPU-accelerated Adobe Flash and shiny Windows features that are bound to take up resources in dedicated GPU memory while freeing up system memory. That doesn't account for 6GB of GPU memory, but it is a start.
Interview with Luciano Alibrandi, the Director of PR at NVIDIA
- Some people say maybe we shall see some working samples in CES
- I cannot answer that one. I wish we can actually... probably do some first claims of what actually the architecture does but nothing confirmed.
Nvidia CES 2010, Las Vegas
At CES 2010, NVIDIA® will be showcasing the latest NVIDIA® GeForce® GPUs powering the hottest PC games and CUDA™ apps...
LOL. Which are those "interesting latest NVIDIA® GeForce® GPUs"? Did I miss something interesting?
I don't see anything about driver improvements here. All I see is a thinly veiled accusation of being a fanboy and a snide remark about a 32nm refresh.
So it appears that there are some advantages with going for a multi-gpu highend. Doubled triangle setup and higher bandwidth. Are we expecting a GTX390 at the same time as the launch of the rest of the lineup?
Quote:
The 5870 does not have more bandwidth than the 4870X2. It's 153GB/s vs 230GB/s on the latter.
Well if you think that drivers for a brand new arch are going to be more stable and more optimized than ones for an arch 3 generations old then I don't think I can really help you understand. And no, that's not part of my optimistic view. It's a separate issue.
Quote:
Your optimistic view for Fermi is that its launch drivers would be crap and Cypress drivers would have improved tangibly by then? Hmmm, interesting take on optimism :)
I must have missed where that comment was directed at you. It wasn't a reply to one of your posts was it?
Yes, as has always been the case. I wouldn't bet on any sort of multi-GPU Fermi at this point. It also remains to be seen if one will even be necessary (or feasible).
Quote:
So it appears that there are some advantages with going for a multi-gpu highend. Doubled triangle setup and higher bandwidth. Are we expecting a GTX390 at the same time as the launch of the rest of the lineup?
We're talking in circles here - one minute you're saying Cypress will benefit from driver improvements, now you're saying Cypress drivers are stable (which is what I said in the first place).
Quote:
Well if you think that drivers for a brand new arch are going to be more stable and more optimized than ones for an arch 3 generations old then I don't think I can really help you understand. And no, that's not part of my optimistic view. It's a separate issue.
In terms of your optimism for Fermi I believe you're referring to it needing to beat the GTX295 or else it'll be a failure. That's not an optimistic view, it's simply setting the low watermark against which Fermi will be judged on release. Personally, I think GTX 295 is far too low of a benchmark.
http://img710.imageshack.us/img710/9360/perfr.png
It's pretty clear in context whose ideas you were mocking.
I wouldn't bank on a multi-GPU Fermi until a refresh. But Nvidia has claimed there would be a multi-GPU version, without stating any sort of timeframe.
Quote:
Yes, as has always been the case. I wouldn't bet on any sort of multi-GPU Fermi at this point. It also remains to be seen if one will even be necessary (or feasible).
I'm saying that there will be more improvement possible in Fermi drivers than in Evergreen drivers. I'm not saying that there will be no improvement in Evergreen drivers at all. Software development is a process; there is rarely a case where a piece of software is bug-free or as optimized as it could be. Also, the hardware did change a little between generations, even if it's not as obvious as things like shader count. Latencies change, the memory controller and scheduler are tweaked, etc. Between now and when Fermi is released, and beyond, the drivers will continue being developed to take advantage of those arch tweaks, fix bugs, etc. It would be unusual for development to just stop and make no improvements/fixes from this point forward.
Quote:
We're talking in circles here - one minute you're saying Cypress will benefit from driver improvements, now you're saying Cypress drivers are stable (which is what I said in the first place).
I'm not the one who introduced the idea that being barely faster than a 295 is enough. I always assumed it would be significantly faster.
Quote:
In terms of your optimism for Fermi I believe you're referring to it needing to beat the GTX295 or else it'll be a failure. That's not an optimistic view, it's simply setting the low watermark against which Fermi will be judged on release. Personally, I think GTX 295 is far too low of a benchmark.
My optimistic view is 40% faster than Evergreen with perfect hardware/driver execution. Pessimistic would be a little faster than the 295 with some sort of problem that needs a respin, board revision, or several driver versions to fix (it's a complex new arch, don't say it's not possible). Realistically, I'd say 30% faster than the 5870 with some minor growing pains.
Since you like computerbase so much I figured I'd share these graphs from the same review with everybody:
http://img30.imageshack.us/img30/519...0x12008xaa.png
http://img30.imageshack.us/img30/371...0x16004xaa.png
I would suspect that the 6GB Quadro will remain a 6GB Quadro and will not filter down to the consumer level. However (and this is a big however), do you know that the FX5800 Ultra 4GB is in fact known to consumers as the ASUS M.A.R.S 4GB GTX 295? ;) So who knows, maybe some board partner might make a consumer incarnation of the 6GB behemoth, which is coming in Q3 2010 in Quadro form.
Now, from what I have read, RAM is useful not only for texture memory and all the fancy video and post-processing effects, but for GPGPU wizardry too. Apparently stuff like OpenCL, CUDA, DirectCompute, PhysX etc. all love the extra RAM and GPU processing power. So perhaps 3GB cards at the high end are not a "bad" thing.
Who would complain about playing a DirectX 11 title with high-resolution textures (2GB's worth) plus a shedload of computational work which also uses up some of that VRAM for fast execution (say a further 256MB's worth)? Extra VRAM is good when you have it, and certainly a whole world better than the hitching, pausing and stuttering you get if you do not have the VRAM.
Personally, I cannot see nVidia releasing an incarnation of Fermi with LESS than 1GB of VRAM (with the GTX 360 or greater nomenclature). If yields are bad and nVidia get bin-happy and push the Fermi GPU down through the range to the lowly numbers of GTS 350/340, then we could see some 768MB models appearing.
IMHO 1.5GB is going to be the base model VRAM with the high end models having 3GB.
This isn't confirmed yet, but the specifications for Quadro and Tesla make it almost safe to assume this will be the memory configuration for Fermi.
And yes, I think it is fair to say that we all want the expensive-electricity-bill, large-carbon-footprint, FPS-pushing Monster Fermi ;)
John
That sounds about right. But keep in mind 40% is Nvidia's number, and that puts it right on top of the HD5970, which would pretty much render that part irrelevant (unless you have a hankering for Eyefinity).
Quote:
Originally Posted by Solus Corvus
http://bbs.chiphell.com/viewthread.php?tid=64351
Post #66 cfcnc:
顺路发个消息,TSMC的圣诞礼物- Fermi A3已经顺利出样
Translation: Passing along some news: TSMC's Christmas present - Fermi A3 samples have already come out successfully.
Post #69 tomsmith in reply:
有个不太好的消息,符合2070 频率目标的比例还不理想,做成2050 的稍微多一点
Translation: There's some not-so-good news: the proportion of chips meeting the 2070 frequency target is still not ideal; slightly more of them are ending up as 2050s instead.
Quick analysis: A3 is out in time for Christmas, with plenty of chips of C2050 calibre (~1200MHz shaders), but not enough of C2070 quality available (~1400MHz shaders).
More rumours, partially good.
That's their number for full target clocks and shaders. Seeing as how they seem to be having some sort of difficulty, I wouldn't be surprised if the numbers are somewhat lower.
As for the 5970, whether or not you believe Evergreen drivers will get better, CrossFire will, and already has gotten better in the newer drivers. Also, as I have mentioned, I don't think AMD will wait till the very end of the generation to release a refresh like last time. They have been executing well lately, and I doubt they will sit around and lose the initiative.
If Fermi is 40% faster than Evergreen and nothing changes on the AMD side, then the GTX380 would be well placed in the market. But whether that is going to be the case remains to be seen. There are some potential issues, as I have explained.
Nvidia Is Happy With Performance of GeForce GF100 “Fermi” Graphics Card.
http://www.xbitlabs.com/news/video/d...hics_Card.html
Interesting read:)
Hmmm, so any takers on this?
Quote:
The flagship Fermi graphics processor will feature 512 stream processing engines (which are organized as 16 streaming multi-processors with 32 cores in each) that support a type of multithreading technology to maximize utilization of cores
360 = 448
380 GTX = 480
380 Ultra (limited quantity, very expensive, loud, hot, with a large carbon footprint and a high electricity bill, yet the flagship Fermi) = 512
I really do hope the 448 pipes are a "Tesla-only" bin, and that we will see the 360 with 480 and the 380 GTX with 512!!
Who knows, if nVidia do what they did last round, we COULD see the GTX 360 with 448 pipes and the GTX 380 with 512 pipes, and then later on nVidia rebrand the 360, say as a 360-480, with 480 pipes ;)
Who knows?!? :confused:
But what we do know is that the Fermi is sounding like it is going to be the FreakingFastFPSPushingMonsterMi
John
Quote:
the company is not only working hard on the new chip itself, but is also developing “perfect” drivers in an attempt to provide ultimate experience for the end-user.
Fap fap
I'm also thinking this will be the case, that the GTX 360 will have 448. Also, if you consider price, NVIDIA needs a card that targets $399 so it isn't far more expensive than the HD 5870 while still beating it noticeably, assuming the top-end GTX 380 costs $599 and lands roughly the same as, or say within 10% of, a 5970 in performance. With 480 pipes NVIDIA couldn't target it at $399, more like ~$499, and it would be too close to the GTX 380, which would hurt its sales; they really need a card that can at least come close to the 58xx series in price, as not that many people buy $400+ cards. Better to save the 480-pipe part for a future GTX 375 or whatever, to counter AMD's higher-clocked 5890.
I'm sure a lot of people would eBay their 5970 IF NV comes out with a card around $599 that has the same or similar performance.
As for NV making sure the drivers are "just right" (that's yet to be seen, of course)... that's good; the other company did not with their last batch of cards.
That seems to be the biggest issue most 5xxx consumers complain about, and first-hand I can tell you that no matter how great and fast the hardware is, it means nothing if you have to work around many driver issues.
Not meaning to be controversial here, but the Forceware 195 drivers are not exactly what I would call "perfect".
1) No PCI-E Gen 2.0 support on X38 and X48 chipsets :(
2) Stutters in 3dmark Vantage
I have reported these issues to nVidia and ASUS, but still there is no fix other than downgrading my drivers. If I want PCI-E Gen 2.0 on my X38 I have to use Forceware 186.18WHQL.
Hopefully nVidia will resolve this issue in the next release.
RPGWiZaRD
I can see exactly what you mean; from a competitive pricing point of view, 448 pipes on the 360 makes sense. I am guessing, if any of the hype is to be believed, that Fermi pipes are much more efficient and powerful than G200 pipes, so we may see GTX295 performance from the GTX360?
John
My prediction: if we say the GTX 295 is on average 10~20% faster than the HD 5870, then the GTX 360 is maybe 10% slower than the GTX 295 and about 10% faster than the HD 5870, and I think the HD 5890 could be made about 10~15% faster than the HD 5870, which is why a GTX with 480 pipes would come in handy later. However, that Fermi is actually performing this well is still a bit difficult to believe, and I wouldn't want to end up disappointed either, but this is really my most optimistic view of Fermi, because NVIDIA really needs to meet these targets at minimum or else they'll have a hard time, and I'm sure NVIDIA knows this. :)
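Just to sanity-check how those percentages chain together (all hypothetical figures from the post above, nothing measured):
Code:
# Everything normalized to HD 5870 = 1.00.
hd5870 = 1.00
gtx295_low, gtx295_high = hd5870 * 1.10, hd5870 * 1.20    # "10~20% faster"

# "GTX 360 maybe 10% slower than GTX 295":
gtx360_low, gtx360_high = gtx295_low * 0.90, gtx295_high * 0.90
print(round(gtx360_low, 2), round(gtx360_high, 2))        # 0.99 .. 1.08

# So "about 10% faster than the HD 5870" only holds toward the top of the
# GTX 295 range; at the bottom end the two cards would be roughly tied.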
@ john
The new NV drivers did break more than they fixed for me, but nothing major.
I think it was an Ion/210/220 thing to put them out... what else, lol.
As far as the drivers go, NVIDIA made huge improvements starting with 195.62, so it seems like a good start and that they're serious about driver improvements now. All drivers from 19x.xx up to 195.62 have been a huge disappointment; 195.62 is the first driver I see as an improvement since the 186.xx/187.xx series. 195.62 behaves more like the 186 or 187 series than the other 19x.xx releases. I'm guessing they jumped back to the older drivers and started working from there instead. NVIDIA has different driver teams working on different branches in parallel, so that's one reason for the back-and-forth jumping of version numbers and why you can get such different results depending on the driver used.
That guy Luciano Alibrandi... wasn't he the one who said a while ago that the GTS250 was a "brand new" product? Or even further back in time, right around the FX series debut, there was again high-powered PR spin from his department over the ill-fated DX9 performance... meh!
Luciano Alibrandi's interview Here
Nvidia delays Fermi to March 2010; AMD to launch new GPU in January-Februar ....
http://www.digitimes.com/news/a20091228PD207.html
Hmm, I thought CES was going to see a live demonstration of a real Fermi and some benchmarks, and then real availability of the GPU would be in March?
Or has this changed and Fermi will NOT be at CES?
If so... :(
Edit: on a more serious note, what has happened to Shintai?
I remember he had a lot to offer in hardware speculation threads (pre-GTX 280 and HD4800 launch he used to post in a lot of threads).
Yeah, where did Shintai go?
FFS - due to the pain in the arse of getting a 58x0 card, I decided to wait and see what the Nvidia offering was like, but now it looks like possibly more waiting!
A launch at the end of Q1 at the earliest isn't exactly news though, is it? Not like it wasn't said a couple of months ago.
Unless I'm misunderstanding that paragraph, the Fermi high-end part is coming in Q2 2010.
Quote:
Nvidia plans to launch a 40nm GDDR5 memory-based Fermi-GF100 GPU in March, and will launch a GF104 version in the second quarter to target the high-end market with its GeForce GTX295/285/275/260, the sources pointed out.
Yea they usually talk about 4 different segments:
Enthusiast
Highend
Performance
Mainstream
GF100 is the enthusiast chip and GF104 is the high-end one, which can be expected to perform somewhere in the GTX275 ~ HD5870 range.
New GPU from ATI... sounds like an enhanced 5870 version with DX11.1 :p:
Probably just Cedar and Redwood, the low-end Evergreen cards.
Yep, if the 5890 does get launched, it would be around the time Fermi officially launches, IMO.
I'm more interested in the GF102 chip, should it exist in the first place. My last nVidia cards, the 8800 GT & 8800 GTS 512MB, were G92s, and they were really satisfying for me, and quite a legendary performer (one that seems like it's going to live forever, from the looks of it, LOL). :D
Yes, I was hoping for a 5890, but it only mentions the new entry-level ones. So no Fermi till March; a pity. I would have waited another few weeks, but into Q2, depending on availability, is probably too long.
Is digitimes a legitimate source?
I would buy a new nVidia card if I found a justification for it. There's nothing worth buying a new card for right now, unless you are building a new machine. I mean, come on, Dirt2? Even AVP won't have any noticeable DX11 effects, I bet. New games are mostly DX9-based and still look really nice.
2009 Honda Civic costs around $16K
2009 Audi A4 costs around $35K
They both have 4 wheels, and they both can take you from point A to point B quite comfortably. On most US highways the speed limit is 55-65 mph, which a Honda Civic can easily reach.
According to your logic, there is no justification for anybody to purchase an A4. Guess what! It's one of the best-selling luxury cars. You know you want one of these cards. If I gave you one for free, you wouldn't say "No, I don't need it; most games still use DX9." You can lie to yourself all you want, but don't expect people to buy your argument.