Naa, this is the clock reported when overclocked using the third-party software Afterburner; the stock clock seems to be only 607MHz... I hope the card can OC to more than 840MHz anyway.
Lanek's right: 840MHz is MSI Afterburner's limit. I don't think you'll be seeing much over 700MHz with the stock cooling.
Of course I mean desktop cards. The 590 is a desktop card, after all. Me saying non-full support implies there is a card with full DP support, otherwise I wouldn't have even mentioned it. Geez. :(
Good spot, the translated source says "38% overclocking potential". They should set the limit at 9001MHz. Imagine the marketing possibilities. :)
Or do it like Asus does when they claim 50% OC, as in 50% better than the default-voltage OC: say 100 MHz + 50% = 150 MHz OC. You just gotta love marketing. :ROTF:
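For fun, the numbers from earlier in the thread actually line up; here's a quick back-of-the-envelope sketch (the 607MHz base and 840MHz cap are from this thread, but the 100MHz "default headroom" is just a made-up figure for illustration):

[CODE]
# Quick check of the OC marketing math discussed above.
base_clock = 607       # MHz, stock clock reported earlier in the thread
afterburner_cap = 840  # MHz, MSI Afterburner's slider limit

# "38% overclocking potential" lands almost exactly on the cap:
print(base_clock * 1.38, "vs the cap of", afterburner_cap)  # ~837.7 vs 840

# Asus-style claim: "50% OC" as 50% more than the default OC headroom,
# not 50% of the base clock. The 100MHz headroom here is invented.
default_headroom = 100
claimed = default_headroom * 1.5      # 150 MHz of headroom
print(claimed / base_clock * 100)     # ...which is really ~24.7% over base
[/CODE]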
The way you wrote it implied that Fermi cards in general don’t have full DP support.
Even without full DP support, DP performance on the GTX 590 will be higher than on the GTX 295, but I don't think he cares about DP with CUDA for Folding ;)
http://www.anandtech.com/show/2977/n...th-the-wait-/6
http://images.anandtech.com/graphs/n...5215/22218.png
Since all the reviews you are talking about were done with 11.1 and 11.2 drivers that were out BEFORE the 6990, you are so right.
But the 11.4 BETA drivers are turning the tables. Look at the slides. Here are 3 of the most demanding games out there, and the 6990 and 580 SLI are head-to-head.
Blah blah blah about the limited VRAM on the Nvidia 580. But it's Nvidia selling those POS cards at $500, not me. And the 6990+6970 combo also costs $1000. It's a real massacre. 580 SLI is totally obliterated.
And this is with the BETA 11.4 drivers! So why talk about ''95% of the benchmarks'' with drivers that were out before the 6990 even existed? So relevant! 11.4 are the first drivers to support the 6990. Nah. Pesky little details. It's so easy to compare with old drivers that don't support the card.
The WEAK point of the 590 is the limited amount of VRAM, not the clock speeds. Look at the slides! Look at what not enough VRAM gives you! The results with the 590 will be the same as, or lower than, 580 SLI, since it runs lower clock speeds. And the VRAM limitation will be the same: only 1.5GB. Results will be the same with the 590 against the 6990.
Please look at those slides. Where do you see 580 SLI ''dominating'' the 6990? I don't see that. Point me to it. Please, analyze those slides, and don't evade the question by posting irrelevant videos. And not some old benchmarks done with 11.1 or 11.2 drivers... There are plenty of those stupid reviews done with drivers that were out before the 6990 was even on the market. Too easy. But commenting on those slides with 11.4 is tougher. And shouting ''troll fanboy FACT FAIL AMD DRIVERS, blah blah blah'' is so much easier and more convenient. :)
The 6990 on the OC BIOS is clearly head-to-head with 580 SLI 1.5GB. Please explain the ''domination''. I see 6990+6970 dominating the 580 SLI at the same price point, but not the 6990 OC BIOS versus 580 SLI. And the 590 will also be limited to only 1.5GB. Same thing. Look at the slides. You can call me a troll or a fanboy all you want, but the topic is the 590, and that card will be underclocked compared to 580 SLI, and will also have only 1.5GB like those. So those slides are totally relevant to the topic.
Do people really believe the 590 will BEAT 580 SLI? No. Or else Nvidia would sell the 590 at $1200. They won't antagonize their 580 SLI market.
http://www.hardware.fr/medias/photos...IMG0031277.gif
http://www.hardware.fr/medias/photos...IMG0031319.gif
http://www.hardware.fr/medias/photos...IMG0031284.gif
http://www.hardware.fr/medias/photos...IMG0031323.gif
http://www.hardware.fr/medias/photos...IMG0031286.gif
http://www.hardware.fr/medias/photos...IMG0031325.gif
And why not post your famous videos of 3X 580 1.5GB, but against 3X 6970 2GB this time, with 11.4 drivers? Where are they? No. It would be too logical to do it. :) And since the 3X 580 1.5GB are VRAM limited, 3X 6970 would be better. No. Everyone's happy to do 2X 6990 against 3X 580 1.5GB. But 3X 6970 should also be there to compare, and with 11.4 (not 11.1) to be relevant. :rolleyes:
Or why no videos of 6990+6970 against 580 SLI 1.5GB?
No. Too easy. 2X 6990 against 3X 580, while even my old mother knows that Quad-Fire on 2 cards doesn't scale well, just like Quad-SLI (2X 590) will totally lose against 3X 6970 2GB. SAME THING.
Why not do a more logical video of 2X 6990 against 2X 590 next week? That's what I want to see.
I have often expressed my concerns over the lack of VRAM on high end GPUs, not only for the frame buffer at >1920*1200 resolutions, but also because there are games which love the extra VRAM (especially when you start adding high resolution texture packs to them).
Unless nVidia either significantly increases the efficiency of memory allocation on their GPUs and/or special editions of the GTX 590 with 3GB per GPU are released, your analysis will be absolutely correct.
Oh, and IF special edition 590s with 3GB per GPU are released, they will be very, very, very expensive.
John
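For a rough sense of the numbers, here's a back-of-the-envelope sketch; the buffer layout and byte counts below are simplifying assumptions, not measurements:

[CODE]
# Back-of-the-envelope render-target sizes; purely illustrative assumptions
# (4-byte color, 4-byte depth, MSAA scales both, triple-buffered color).
def render_target_mb(width, height, msaa=1):
    color = width * height * 4 * msaa * 3   # three buffered color targets
    depth = width * height * 4 * msaa       # one depth/stencil target
    return (color + depth) / 1024**2

for w, h in [(1920, 1200), (2560, 1600)]:
    for aa in (1, 4, 8):
        print(f"{w}x{h} {aa}xMSAA: ~{render_target_mb(w, h, aa):.0f} MB")
# 2560x1600 at 8xMSAA is already hundreds of MB before a single texture
# is resident -- hi-res texture packs pile on top of that.
[/CODE]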
i never said ati drivers suck...
i had just as many issues with ati drivers as i had with nvidia drivers...
yepp, a lot of people already made up their mind and are trying to justify their decision :D
thats true... the position of the power plugs on the 590 is really bad for ln2 and not great for water either...
i dont think itll make people choose a 6990 over a 590 though... people who want a 590 will still go for it i think...
I really don't understand what you're asking. 3D rendering is an extremely compute intensive process and the vast majority of that burden falls to the graphics card. What do you mean by the GPU is the bottleneck? The reason a cheap dinky CPU is good enough is because 3D rendering is a mostly GPU intensive task. Anybody who follows this scene should know that.
Try rendering a game on your CPU and see how well your 95w CPU does at that task.
That's not what I'm talking about. There are plenty of tasks for a CPU to do. It is running the rest of the computer's programs as well. A single core CPU doesn't cut it anymore; that's why we have 4 and 6 core CPUs. Also, GPUs will always be better at their given tasks. A GPU with hundreds of cores is far better at chucking out pixels very quickly than a few cores from the CPU. However, a CPU is much better at handling linear instructions.
What I'm saying is that a very CPU intensive game such as BC2 can be maxed out on a high-powered dual core or a mediocre quad core, and the framerate will remain above 60 FPS if the video cards are powerful enough. The CPU calculates a lot of stuff: gun shots/hit detection, environmental interactions, physics calculations, plotting character movement and AI interactions, etc. On top of all that, it runs whatever programs the OS is running in the background as well, including the sound driver, video driver, hardware controllers, etc. The CPU is the brain of the computer. Somehow, that dinky CPU can do a ton of stuff while the GPU is completely devoted to rendering in games.
For reference, I bet we could have a game that uses twice as much CPU resources as BC2, and if you get an SB i5/i7 CPU (around $230), a $110 motherboard, and $50 worth of memory, it will chew it up and spit it right back in your face. Do the same thing for graphics (create a game that uses twice as much rendering power) and the best video cards out there will choke and sputter, even the uber-powerful $1000 video card setups. Hell, we still can't max out Crysis at 60 FPS without spending $1000+ on video hardware. Even a couple of 6990's can't keep Crysis above 60 FPS the entire time. (Start @ 17:49 for Crysis.) That game was released 4 years ago. Why is it still a benchmark that hardware companies are trying to overcome?
GPUs are very complex pieces of hardware, no doubt. But a GPU cannot function on its own without a CPU to drive it. A GPU does not render a scene by itself unless it's programmed to do so; it still has to have the coordinates for rendering and other instructions given by the CPU. That's why a benchmark like Heaven requires about 5-20% of CPU power to run flawlessly, even though it has virtually no AI or interactions requiring CPU power.
Sorry but I'm really not getting your point here. You started off saying that GPUs are "bottlenecking CPUs". That's completely false. A cheap CPU is enough to run most games but we need expensive GPU hardware. That's cause games don't need much CPU power but a lot of GPU speed. That's not the fault of GPU manufacturers or GPU hardware, it's just a fact based on how the software is written.
A bottleneck occurs when a component is unable to do more work because it's held up by another component. That's not the case here. CPUs don't do more work cause the software isn't giving them more work to do.
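To put that in concrete terms, here's a toy model of my own (nothing from a review, and the millisecond figures are invented): treat frame time as roughly the max of the CPU's and the GPU's per-frame work, and whichever side sets the max is the bottleneck.

[CODE]
# Toy bottleneck model: per frame the CPU preps work, the GPU renders it.
# Frame time is roughly the slower of the two; the other side sits idle.
def frame_stats(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)
    fps = 1000.0 / frame_ms
    limiter = "GPU" if gpu_ms >= cpu_ms else "CPU"
    idle_pct = (1 - min(cpu_ms, gpu_ms) / frame_ms) * 100
    return fps, limiter, idle_pct

# Invented numbers: 5ms of CPU work, 14ms of GPU work at high resolution.
fps, limiter, idle = frame_stats(cpu_ms=5.0, gpu_ms=14.0)
print(f"{fps:.0f} FPS, {limiter}-limited, other side idle {idle:.0f}%")
# Shrink the GPU work (drop the resolution) and the CPU becomes the limit:
fps, limiter, idle = frame_stats(cpu_ms=5.0, gpu_ms=3.0)
print(f"{fps:.0f} FPS, {limiter}-limited, other side idle {idle:.0f}%")
[/CODE]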
So you're saying that the first statement is false, and the second statement is true? A bottleneck happens when a piece of hardware isn't fast enough to keep up with the rest of the computer. Very rarely is a CPU the bottleneck. Most of the time the GPU is the bottleneck in a game. I guess what I'm trying to say is I don't see how the first half and the second half of that statement don't go hand in hand. When a GPU bottlenecks a CPU, it means the GPU isn't fast enough to keep up. It also means that you can use lower power CPU's to run games in most instances.
You just reinforced my point about GPUs not being able to keep up with CPUs. Thanks. :)
I'm not sure what the performance will be on a 590, but from what I recall software audio can demand enough hardware threads and CPU cycles that an overclock may be needed, and the thing is, most of the newer games coming out today use software audio. I'm not implying those games' audio bottlenecks the CPU; I'm just throwing out a hypothetical situation, now that you want the CPU's full attention to run a dual-GPU solution.
Too true. My fiance's system with an E8400 idles at around 5-7% CPU usage because of that stupid audiodg process. Everything sound-wise is done in software now if you're running Vista or 7, which makes a lot of sound cards useless except for post-processing effects.
I've never found the extra CPU overhead to be a problem though.
Yep.
Remember this all started with you complaining about the size of GPU coolers and their power consumption. I'm simply saying that GPUs have higher power consumption and bigger coolers cause they do far more of the work. Your complaints ignore the realities of the software being run on modern systems.
If you want to argue that high end CPUs are "bottlenecked" in games cause they hardly have any work to do during 3D rendering I can get behind that :) I would say CPUs are underutilized by game engines. There is a lot more work CPUs could be doing that have no dependency at all on the GPU - better sound, AI, animation, physics etc. The fact that your six core CPUs are idling is no fault of the graphics card.
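Just to illustrate the kind of GPU-independent work that could fill those idle cores, here's a very rough sketch; the subsystem list and the busywork inside are entirely made up:

[CODE]
# Sketch: per-frame CPU work with no GPU dependency, fanned out across
# cores. The subsystems and the stand-in workload are invented.
from concurrent.futures import ProcessPoolExecutor

def update_subsystem(name):
    # Stand-in for real work (AI, audio mixing, animation, physics):
    # burn some cycles so the parallelism is actually measurable.
    sum(i * i for i in range(200_000))
    return name

if __name__ == "__main__":
    subsystems = ["AI", "physics", "audio", "animation"]
    with ProcessPoolExecutor() as pool:  # one worker per core by default
        for name in pool.map(update_subsystem, subsystems):
            print(f"{name} updated for this frame")
[/CODE]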
I'm just going to say that with a single 6950 I saw some nice improvements with a few games with a conservative overclock on my i7 860 which is still a top notch cpu.
I want this
There are more games out now and in the pipeline that require the latest SB CPUs. Battlefield is the perfect example of this: at 19xx res and up you almost double your FPS going from a quad-core 775 chip to an overclocked 2500K/2600K, and that is with just single player. In multiplayer the CPU comes into play even more. MMOs and RTS games are also known CPU hogs.
When I changed my E8500 for my current X3370 I did notice an increase in the frame rate, but I don't think that a much more powerful CPU would be as useful.
Going from 30 FPS to 60 FPS is a nice performance increase, but going from 60 to 120 is absolutely useless (as long as you don't use 3D, that is).
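The frame-time arithmetic backs that up; each doubling of FPS buys back half as many milliseconds as the previous one:

[CODE]
# Frame time in milliseconds for a given FPS: latency gains shrink fast.
for old, new in [(30, 60), (60, 120)]:
    saved = 1000 / old - 1000 / new
    print(f"{old} -> {new} FPS: {1000/old:.1f}ms -> {1000/new:.1f}ms "
          f"per frame (saves {saved:.1f}ms)")
# 30 -> 60 saves 16.7ms per frame; 60 -> 120 saves only 8.3ms.
[/CODE]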
http://www.guru3d.com/article/core-i...600k-review/21 plus this: http://www.overclock.net/intel-gener...4-955-3-a.html
Sorry to say, but that statement is false. The framerate is virtually identical. At that level, you're GPU limited. Even with an HD 6990 or GTX 590, you're still going to have some limitations. There's no doubt that an i7 2600K is faster than a Phenom II X4 @ 4GHz or a similar Core 2 Quad, but when you're arguing that a faster CPU will benefit the gaming experience, in this case, I just don't see how that can be possible. I mean, even when the CPU is the limitation at 1024x768, BC2 is still chucking out 90+ FPS on a Phenom II quad core. As soon as you move up one more level to 1280x1024 or 1600x900, the CPU is no longer an issue. Would anyone here seriously consider that a bad thing?
Those tests were done with a GTX 580, too. No way that thing could be considered a bottleneck. :p: