Until he has got the 590. The GTX 580 was released 8 days after.
Has this OBR ever posted anything that wasn't faked?
lol, not for a long time.
And to think the same OBR minion keeps bringing his faked stuff to this respected forum........ I can't stop thinking how shameless this particular guy is, feeding us trash all over the place.
Well, I am still eagerly awaiting the release of the GTX 590, not only because it looks like it is going to be a very powerful graphics card, but also in the hope that nVidia are going to add even more improvements to SLi :)
Even those of us on GTX 295s may benefit from the SLi enhancements.
With regards to the OBR discussions, I never knew the guy, but did see his posts from time to time... and then he vanished. I do not know why he was banned.
Perhaps it is best to leave him be and let him get on with his life rather than accuse him of faking this, that and the other.
John
Looking at the thread... I'm seeing a lot that isn't allowed here in the News Section. Need I remind people that a first offense = permanent removal from the News Section plus a brief forum ban.
From a cursory glance, a few people will be banned later today when I have more time to look at it all.
If you're about to post about someone else (or their behavior), rather than post about the topic, you probably shouldn't post.
wow look at all the B-hammers....:eek:
Yea. People seem to get really riled up when new flagship cards are near release. It's just sad that it led to several veteran members getting banned.
Bring on both the 6990 and the GTX 590! It's been so long since we've had both dual chip solutions come out near the same time. It's extremely exciting! :)
Oh geez, I guessed it would come to this following this thread. I too find it quite amusing to see people get all riled up about new flagship GPUs on the horizon.
Back on topic. My birthday is coming up and I hope they release it soon, as I need something new to play with.
I wish people wouldn't get so upset. Competition is good...
I know I asked this before, but do you guys think the power envelope for this card will be lower than the GTX 295's (nvidia claims a 680W PSU)? I wouldn't mind picking one up, but my rig can't fit anything larger than a 600W PSU.
I think both solutions use at least dual 8-pin power connectors. If that's the case, they're both beyond the 300W PCI-e spec (75W from the slot plus 150W per 8-pin = 375W) and will require a very beefy PSU to run; I'm guessing 750 watts minimum. Both cards will be slightly downclocked to conserve a little power/heat output, but when it's all said and done, they're both dual flagship cores from both companies, neither of which is very energy efficient.
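For what it's worth, here's the back-of-the-envelope budget behind that guess, as a quick bit of C. The connector and slot limits come from the PCI-e spec; the rest-of-system draw and PSU headroom figures are just my own assumptions.
Code:
/* Rough power budget for a dual 8-pin card. Connector limits are per the
 * PCI-e spec; the rest-of-system and headroom numbers are assumptions. */
#include <stdio.h>

int main(void)
{
    const int pcie_slot_w   = 75;                             /* x16 slot supplies up to 75 W  */
    const int eight_pin_w   = 150;                            /* each 8-pin connector is 150 W */
    const int board_power_w = pcie_slot_w + 2 * eight_pin_w;  /* = 375 W, past the 300 W spec  */

    const int    rest_of_system_w = 250;   /* CPU, drives, fans... (assumed)     */
    const double psu_headroom     = 1.25;  /* don't run a PSU flat out (assumed) */

    printf("Card power ceiling: %d W\n", board_power_w);
    printf("Suggested PSU:      ~%.0f W\n",
           (board_power_w + rest_of_system_w) * psu_headroom);
    return 0;
}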
Don't get it, why is it taking AMD and NVIDIA so long to strap two chips on a card with a big heatsink?
GF110 and RV970 have been out for how many months now?
I guess it's about strategy; they don't want to hurt the sales of their high-end single-GPU cards...
News from Uncle Fuad :rofl:
Quote:
Cebit 2011: Dual-chip Nvidia is real
We had a chance to see Geforce GTX 590 at Cebit. It is real, it runs and it’s coming soon. It is quiet and packs enough power to run Crysis 2 in 3D at very nice frame rates.
The card has two PCIe 2x8 power connectors and it is rather hot, something that doesn’t come as a surprise. The card looks slightly different than the EVGA card we saw at CES 2011, but the EVGA card had a custom cooler that will be different than the reference one.
The launch has not been set, sources confirm, but there is a chance for a March, Q1 2011 launch. It looks like Nvidia wants to see Radeon HD 6990 performance and then decide on the final clocks and specs.
Once this card launches, it will be time for us to start sniffing about possible Q4 2011 28nm part launch dates.
http://www.fudzilla.com/graphics/ite...-590-in-action
Fuad, I think you smoked some weed before you went to CeBIT 2011. Because of that you saw a GTX 590 there, and you also saw the next-generation Nvidia Kepler :D:D:D
Quote:
We had a chance to see Geforce GTX 590 at Cebit
Fuad You make me laugh :rofl::ROTF:
Too few leaks for my liking, just this bs fake stuff. I just don't see GTX 590 reaching 6990 due to power issues, and I don't see a reason to release an inferior dual gpu card for Nvidia. They should just stick with single gpu or sli.
Nvidia had better up the RAM or it's going to look bad. In addition, this is a pure king-of-the-hill competition thing and they need to keep the momentum. They know Cayman (Antilles) is a very fast and stable chip that is not quite the power hog. I personally want to try out both the 590 and the 6990, but since they will be limited on overclocking it's not really worth it when you can just grab a 4-slot board and put in single-GPU cards.
On that note, after testing all the flagship cards from ATI since the 3xxx series, I would not mind purchasing this card if/when it comes out on the market. Bench it and then resell it :D
I wonder if these dual-GPU cards make any sense for people already sitting on a decent setup. I mean, 6 months after they release we should see 28nm products, no? Was wondering if it's finally time to change the 295, but it seems it will hold out a while longer since I'm not really playing the latest stuff or going to play BF3 (would love to but don't think I'll have the time). Prolly only makes sense for someone building something from the ground up.
There's not a lot of demanding games out there atm so there's not much reason to buy a powerful card unless using triple monitors or 30" monitor. And it seems like there won't be as long as developers keep pushing multi platform titles for the current console generation.
Thanks guys. I have an SG07 and putting in a 700W/800W is nearly impossible, although it MIGHT be interesting! I do have an 850W Corsair, hmm......
850 corsair should be enough.
I would not be surprised if we see an ASUS MARS-style edition with 3GB per GPU, higher clocks, a custom cooling solution and the requirement for a minimum of a 1000W PSU.
However the normal GTX 590 should run on an 800W PSU (a decent quality one). Hopefully it will run on a 900W Gold PSU (Enermax) ;)
John
Winpower :p:
John
:shocked: Dual GF110!! That's insane... not impossible, but just wild. Gonna be one super hot card. Sure would be nice in my system with an EK water block to match. :rofl:
nVidia could be preparing something rather exciting for the 590 launch.
These are from the release notes for nVidia's BETA Release 270 CUDA 4.0 Candidate driver for developers and select reviewers. It appears that nVidia are making the CUDA architecture more parallel; I wonder how this will affect the world of SLi and DirectX 11 thread lists? (A rough sketch of the new single-host-thread, multi-GPU model follows the list below.)
Quote:
* Unified Virtual Addressing
* GPUDirect v2.0 support for Peer-to-Peer Communication
* Share GPUs across multiple threads
* Use all GPUs in the system concurrently from a single host thread
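Roughly what that last bullet means in practice, a minimal sketch in plain C against the CUDA 4.0 runtime (the buffer size and the memset workload are made up, purely to show one host thread driving every GPU at once):
Code:
/* Sketch of the CUDA 4.0 "one host thread drives every GPU" model.
 * Plain host C against the CUDA runtime; the buffer size is arbitrary. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int n = 0;
    cudaGetDeviceCount(&n);

    const size_t bytes = 64 << 20;   /* 64 MB per GPU, just an example */
    void *buf[16] = {0};

    /* Pre-4.0 the runtime wanted one host thread per GPU.  Now a single
     * thread just switches devices and queues asynchronous work on each. */
    for (int dev = 0; dev < n && dev < 16; ++dev) {
        cudaSetDevice(dev);
        cudaMalloc(&buf[dev], bytes);
        cudaMemsetAsync(buf[dev], 0, bytes, 0);   /* runs concurrently on each GPU */
    }

    /* Wait for every device to finish, then clean up. */
    for (int dev = 0; dev < n && dev < 16; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaFree(buf[dev]);
    }

    printf("Drove %d GPU(s) from one host thread\n", n);
    return 0;
}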
Also nVidia recently posted this as an announcement on their official forums.
Does make you wonder if a rather power-hungry multi-GPU card is coming rather soon, doesn't it?
Quote:
Furmark is an application designed to stress the GPU by maximizing power draw well beyond any real world application or game. In some cases, this could lead to slowdown of the graphics card due to hitting over-temperature or over-current protection mechanisms. These protection mechanisms are designed to ensure the safe operation of the graphics card. Using Furmark or other applications to disable these protection mechanisms can result in permanent damage to the graphics card and void the manufacturer's warranty.
John
For CUDA, well, it's a development kit essentially for developers and Quadro/Tesla systems, and I don't see what CUDA has to do with DX11 and SLI performance in games (outside PhysX). This update is not directly aimed at "game development"; it's more likely essential for professional computing.
As for the Furmark thing: this type of protection has been there since the release of the GTX 580 & 570, especially after the "GPU-Z" tricks.
Most likely, it's the thermal limit of a card they can design within a certain set of specs (length, cooler size, power-system complexity, etc.)... Not all power-related problems are based purely on power draw :)
Also, there is a limit to the amount of current you can push through solder and copper :p:
Best Regards :toast:
Very true, but correct me if I am wrong: doesn't nVidia use CUDA for PhysX?
(as in, PhysX runs over CUDA)?
I have heard that the PhysX 3.0 SDK (currently they are on 2.8.4) will thread across multiple GPUs, so surely this means it requires a CUDA 4-based driver? :confused:
Or is this entirely independent and I have got confused over how nVidia implement PhysX relative to CUDA? Either way, surely it is a sign of things to come? :shrug: (even if we are just talking in the folding and transcoding/encoding department).
John
Geforce GTX 590 launches March 22
http://img849.imageshack.us/img849/6...0432064222.png
Quote:
In about two weeks Nvidia introduces its retaliation to the Radeon HD 6990. The Geforce GTX 590 boasts 1024 CUDA cores, 3 GB of GDDR5 memory and a soaring 375 W TDP.
I don't think the clock will be in the 6xx range... the card would be too slow (the 580 is at 772MHz and the 570 at 732MHz... imagine the result). I believe they base this info on the GPU-Z screenshots we have seen.
What is going on with all the new beefy cards having only 2x 8-pin power?
Yes, PhysX is somewhat CUDA related, although not absolutely mandatory, e.g. the original AGEIA accelerator. :)
And PhysX 3.0 is supposed to thread across multiple CPU cores, not GPUs, is that what you mean? http://physxinfo.com/news/3414/physx...lti-threading/ No point in dividing it across multiple GPUs when something like a 9600 is enough to run it, at least in current implementations.
Thank you for clearing that up for me DarthShader :up:
Although I am sure I did hear somewhere that PhysX 3.0 would be kinder in multi-GPU situations. At the moment, on the single-PCB GTX 295 all PhysX processing is done on GPU B :(
Rendering is done on both GPU A+B. So in games which use PhysX, GPU B is working a lot harder than GPU A. If the work could be split across multiple GPUs, then PhysX would have less of an impact.
But hey, nothing wrong with having some SSE and multi-threading love :p:
John
So much speculation. After plugging some numbers into Excel, and assuming the leaked specs are reasonably accurate, even at a 600MHz clock it will be at least 50% faster than a GTX 580 in raw shader throughput because of the number of cores (the GTX 580 stock clock is 772MHz). It will also be faster than two stock 560 Tis in SLI. In short, this will be a monster card, but it will be slower than the overclocked 6990; the 590 would need a 650MHz+ clock to beat the 6990. If somehow they managed to get the clocks up to 700MHz, the GTX 590 would destroy everything and bring about Armageddon.
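For what it's worth, the raw throughput arithmetic behind that claim looks like this; the 590 numbers are of course still only the rumoured specs:
Code:
/* Raw shader throughput: rumoured GTX 590 vs a stock GTX 580.
 * Cores x clock only; SLI scaling, memory bandwidth etc. ignored. */
#include <stdio.h>

int main(void)
{
    const double gtx580_cores = 512.0, gtx580_mhz = 772.0;  /* stock GTX 580            */
    const double gtx590_cores = 1024.0;                     /* rumoured: 2 x full GF110 */

    double at600 = (gtx590_cores * 600.0) / (gtx580_cores * gtx580_mhz);
    double at650 = (gtx590_cores * 650.0) / (gtx580_cores * gtx580_mhz);

    printf("590 @ 600 MHz = %.0f%% of a stock 580\n", at600 * 100.0);  /* ~155% */
    printf("590 @ 650 MHz = %.0f%% of a stock 580\n", at650 * 100.0);  /* ~168% */
    return 0;
}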
For the GTX 590 to be faster than the 6990 it would have to be faster than 570 SLI. Now, can Nvidia pull off 570 SLI in one card? They can of course use full GF110 chips, but they need to drop the voltage and clocks to a 375W level, and I don't see how they could pull it off considering a single GTX 570 has a TDP of 220W.
Personally I think performance-wise Nvidia will admit defeat, but the dual-GPU card could still be a good offer if priced accordingly. It would give a good option for Nvidia Surround. Also, the reference cooler might actually be something usable.
Well according to TPU's power consumption numbers for a single 570 it's very realistic for a dual 570 to operate at a 375w level.
http://www.techpowerup.com/reviews/H...D_6970/27.html
2x GF110 with 512 SPs @ 650-670MHz may be able to beat the HD 6990 at default clocks (830MHz).
700MHz seems high but not impossible with handpicked chips and aggressive power control. C'mon nvidia, surprise us... :D
I hope ATI and NVIDIA don't get caught up in a perf race... both cards will be monstrous regardless of clocks. Who cares if A is 10% faster than B if both are MORE than fast enough? I hope they don't release super hot 'n' noisy cards which make SLI and xfire look like a better alternative lol.
Looking back into the past, my guess is ATI will be faster but hotter and noisier.
I don't think you will see PhysX across multiple GPUs being used in games, as it already induces too much latency to be used in real-time with just a single GPU. That's why all PhysX effects on the "GPU" are only effects and not interactive; interactive physics stays on the CPU.
You are optimistic (this is not a troll): 2x 580 @ 650MHz will never get close to them...
The 580 is @ 772MHz............ and 570 SLI has a 732MHz base speed.
And we all know how big an impact the core clock or shader speed has on Nvidia's ALUs.
I don't want to say whether AMD or Nvidia will be faster (I'm waiting for the tests, and for all I care lol); it's just that 650-670MHz doesn't look like enough compared to a 6950 CFX with 30MHz more and the full 6970 SP count, or an 880MHz version with the full 6970 CFX core speed and SPs.
This is not to start a "fight over who will win or lose" (for all I care), just to comment on your numbers.
I don't think so... underclock 2x GTX 580 and you will understand the problem. But yes, you are right, let's wait for the "reviews".
^^ 3 DVIs? No HDMI? That's kinda odd because I would have expected it to have HDMI, but whatever, I prefer DVI.
I think Nvidia is in a lot of trouble if they really want that performance crown.
They will need two downclocked GTX 570s to compete at the same power consumption as the 6990:
http://www.webpagescreenshot.info/im...01171656pm.png
BUT who knows what lies or tricks nvidia is ready to use ;)
I think that will be to their (our) benefit.
If you have to pay for 2 downclocked chips, hopefully the price you pay is for their current performance (so like $600 instead of $800+); then all we gotta do is watercool and overclock, get 30% more perf out of it and catch up to OC'd 580 SLI. Most people paying for such cards either don't care about a higher price, or know how to overclock.
Remember the 5970: they advertised non-stop the ability to OC it past specs, and that's how it's going to be in the future if people keep trying to squeeze as much as possible into 300W, or they build it for more and just ship a profile for 300W. The perf crown back in the day was just a simple "who's the strongest", but now it seems to be who has the more efficient design at exactly 300W. They are trying to be smarter about packing in more perf while maintaining PCIe compliance, and testers need to be aware of that too so they can give a better idea of real-world efficiency and perf, instead of looking at power consumption from just one benchmark that is nowhere near real-life use.
IF the GTX 590 is within 10% of the Radeon 6990, and costs £100 less and is a lot quieter and a lot cooler then nVidia will win this round...... in my opinion.
However I can see the GTX 590 being more expensive, hotter and potentially louder too :(
= A DRAW!
John
lol
Quote:
What the......
That is Sweclockers.com's test and those numbers are wattage during a normal Vantage run, which represents the real power draw better than Furmark. Sweclockers 6990 review
How many runs were done?
As many know, Vantage peaks in several different areas; many of which are less than a second long and may not be picked up by a standard power meter.
In addition, CPU usage is a HUGE factor and can increase / decrease the numbers accordingly and in a non-linear fashion.
Looking at that chart, it seems like the calculations for some cards are VERY high while others are low. It could be that the monitor is picking up the areas where CPU + GPU peaks converge in some situations and registering situations of non-convergence in others.
yeah I agree that chart looks really fishy.....
Kristers Bensin can you please link us that REVIEW?????
http://images.hardwarecanucks.com/im.../HD6990-84.jpg
It's already linked, and here is the Furmark part:
http://www.webpagescreenshot.info/im...01193550pm.png
As you can see, Furmark doesn't show a real-world perspective, either because the card gets downclocked by AMD PowerTune or because they just show the "peak" wattage consumption.
Here is also a review from NordicHardware which shows a 472W draw for the whole system; these results of course differ depending on the equipment used (different samples of GPU and CPU). Click here.
http://www.webpagescreenshot.info/im...01194704pm.png
The Nordic hardware chart doesn't include other comparative solutions.
Whatever, as I was saying, nvidia will have a hard time battling the 6990 within the same power consumption, especially when you look at how close the single GTX 580 is to the 6990. Even 570 SLI is above the 6990 in terms of power consumption.
It will be interesting to see nvidia's binned 580 cores competing against AMD's binned 6970 cores.
What's your opinion if someone did a 1-hour loop of some game or graphical benchmark and looked at the total power consumption on, say, a Kill A Watt meter? Do you think that spiky consumption would result in a horribly skewed total watt-hour figure? While the display of a meter like that only updates every second or so, does the watt-hour part depend on those low update frequencies?
You take power readings to heart a little more than most sites do, so your opinion is quite valued.
Let's break this down, starting with benchmark selection.
In an optimal situation and in order to ensure the CPU's power consumption plays as small an influence as possible, a non-gaming benchmark should be chosen.
Granted, this may not show "real" power consumption but that's not really the point. What it does is show COMPARATIVE numbers which is ultimately the goal of any chart.
I have tested literally every possible application in order to determine the best possible combination of high GPU load and low CPU load. Furmark and a number of others tend to put a high load on the CPU so those were out. I ended up being left with two options that put a constant load on the GPU: 3DMark's Batch Size test and Vantage's Perlin Noise.
Unfortunately, Perlin Noise was trashed since Vantage reloads the benchmark every time, which eliminates the "constant" load that is needed. NVIDIA's application detection has started throttling Perlin Noise as well, so that was definitely thrown out.
The Kill A Watt is a great tool but ultimately very dangerous to use for comparative power consumption testing when using games, etc. The reason for this is exactly what you outlined: its polling rate is ~0.75 seconds, which means it can completely MISS a peak if the system load isn't at a near-constant level. Hence why using a game, or 3DMark's standard tests, is a huge mistake. Even the UPM meter I use has a ~0.25s polling interval and I still don't think that is quick enough to accurately judge the peak power consumption of a game.
Now to the heart of your questions:
No. Not at all. In order to get a proper idea of watt-hours, a large sample size needs to be taken, say over the course of 12 hours. In those 12 hours a little over 43,000 data points will have been logged, and even if 1,000 were absolute peaks, the result would carry the weight of averages.
Quote:
you think that a spiky consumption would result in horribly skewed total watt-hour consumption? while the display of a meter like that only updates every second or so, does the watt-hour part depend on those low update frequencies?
Hope that makes sense. :up:
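To put some numbers on that, here's a toy watt-hour calculation: a 12-hour log at one reading per second, mostly sitting at 350W with 1,000 one-second spikes to 500W (all the wattages are invented for illustration). The spikes barely move the total:
Code:
/* Toy watt-hour calculation: 12 hours of 1 Hz readings (~43,200 samples)
 * with 1,000 one-second peaks. All wattages invented for illustration. */
#include <stdio.h>

int main(void)
{
    const int    samples = 12 * 3600;          /* one reading per second */
    const int    peaks   = 1000;
    const double base_w  = 350.0, peak_w = 500.0;

    /* Each 1-second sample covers 1/3600 of an hour. */
    double wh_flat  = samples * base_w / 3600.0;
    double wh_spiky = ((samples - peaks) * base_w + peaks * peak_w) / 3600.0;

    printf("Flat load:  %.1f Wh\n", wh_flat);                  /* 4200.0 Wh */
    printf("With peaks: %.1f Wh (+%.2f%%)\n", wh_spiky,
           100.0 * (wh_spiky / wh_flat - 1.0));                /* about +1% */
    return 0;
}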
^ Thanks a bunch, I completely forgot about the other 3DMark tests.
Do you think those make great use of the RAM and core? Or is the RAM's power consumption just so minor today compared to the core and VRMs?
Sure, power consumption is going to be high; it is going to be high from both parties at this level of performance. The power consumption is no worse than an SLI or CrossFire configuration delivering comparable performance levels; instead of two cards it's all rolled onto a single PCB.
480s and 580s still sell despite having higher price tags and higher power consumption than the competition, but they were also faster than the competing products.
The Batch Size test uses less than 5% of an eight-core CPU at most, as long as the detail settings are increased enough to ensure 30-40FPS. Higher than that and CPU cycles begin to be eaten up.
IMO, the memory really is minimal in its impact. What is a concern is HDD power though. Some tests and games tend to access the hard drive much more frequently and certain high capacity drives can consume 15W or more during certain read/write/seek operations. When you're talking about certain lower-end cards, that 15W can have a massive impact. Hence why any test chosen should not stress the HDD during rendering.
I know EXACTLY what you mean with the HDD. My total consumption is 350W max from the wall with the CPU in Prime and the GPU in Furmark, idling at 68-72W; if I throw in my WD Black, it's an extra 10 watts.
If only someone built a test that uses up a GPU fairly (fewer shaders, more textures), no CPU usage, and fills the memory without having to access the drive, we could be getting somewhere.
I am already looking forward to Hardware Canucks' GTX 590 review; it was refreshing to see 8X FSAA tests in their HD 6990 review. I hope to see these sorts of tests done more often by reviewers, and not always with the same games.
John
From Expreview -
NVIDIA GeForce GTX 590 To Roll Out On March 22nd
The AMD Radeon HD 6990 graphics card was unveiled yesterday; what about the NVIDIA GeForce GTX 590? We have just got the news that NVIDIA has confirmed the launch date of the graphics card.
According to NVIDIA, the release day of the GeForce GTX 590 falls on March 22nd. The GeForce GTX 590 packs dual GF110 cores, features 1024 CUDA cores, 3GB GDDR5 memory and dual 8-pin external power connectors, and has a TDP of 375W.
Coincidentally, March 22nd is also the release day of Crysis 2.
Source
So it is confirmed then, it's two GF110s. From TPU:
NVIDIA GeForce GTX 590 Launch Date is March 22
The dust seems to have settled down, after AMD's launch of the Radeon HD 6990, extending the red-team's performance lead previously held precariously by the Radeon HD 5970, to the GeForce GTX 580. It looks like NVIDIA will challenge the performance leadership with GeForce GTX 590, a dual-GPU graphics card that uses two GF110 GPUs (the ones on GTX 570 and GTX 580), for an SLI-on-a-stick solution. Rumors of NVIDIA working on this card became concrete as early as in November 2010, when NVIDIA's reference board became public for the first time.
Latest reports suggest that NVIDIA has chosen March 22 as the launch day of GeForce GTX 590. Incidentally, that is also the launch date of EA/Crytek's much-hyped, initially DirectX 9 action/shooter game, Crysis 2. GeForce GTX 590 uses two GF110, though the shader configuration and clock speeds are not known. Since NVIDIA is chasing the top-spot, you can expect the most optimal configuration for the GF110s. A total of 3 GB (1536 MB per GPU system) on board, and NVIDIA's workhorse PCI-E bridge, nForce 200 will be the traffic cop and radio station between the two GPUs. The card will be able to do 3DVision Surround (NVIDIA's multi-display single head technology comparable to ATI Eyefinity) on its own, without needing a second card.
Anyone want to guess the price of the card? I would think around $800-$900, does this seem about right?
Only 3GB GDDR5... hmm I wonder how that plays out at 2560x1600 against the 6990
Depending on performance, I'd guess MSRP would be $799... probably being a little faster than the 6990 and charging a premium to do so. At least that's how it normally works; however, this time it MAY be different, with the main reason being power, of course. Since AMD has always gone with smaller, efficient GPUs vs a power-hungry beast, this MAY actually pay off and keep the performance crown, since for nvidia to be at 375W they will more than likely have to lower clocks, and I DOUBT they will have a dual BIOS like AMD did, which is VERY intelligent: pull out a little extra performance and break the "limit" while still being compatible with it.
MSRP will be $799.99
Nvidia has a tendency towards delivering highly OC'd as well as stock-clocked reference cards.
How often is that switch really going to be used, and really, how hard is it to simply OC any video card with one of the mainstream utilities or by flashing? Dual BIOS is cool and neat, but at the end of the day it doesn't really do anything that can't already be done very easily from within the OS.
I just wish NVIDIA had increased the VRAM to 6GB. Yeah, maybe that is overkill, but 1.5GB per core just isn't cutting it anymore. Still, I'm glad to hear that it will use GF110 cores.
Question: would you be able to use this GTX 590 + a GTX 580 in an SLI setup? Dang, I just pulled the trigger on the 6990 and I'm getting it tomorrow, thinking this was far away from being available... I really wanted to go with Nvidia on my next setup...
A Swedish shop already lists a GTX 590 price: http://translate.google.com/translat...x-590-prislapp
Comparing with current Swedish prices of the GTX 580, which usually range from about 4000~5000 SEK vs ~$500 USD on Newegg, this GTX 590 will probably land at a $799 USD price at launch (if we're lucky, maybe down to $749 at cheap US retailers).
From that source
:eek: WTF?
Quote:
It can be compared against competing Radeon HD 6990 with a price tag of around 6000 dollars.
Ofc there will be overclocked versions, even if I'm not sure they will be out from EVGA the same day as the 590 release (maybe); it's not like pulling off a 900MHz 560 Ti.
But this will be the case for the AMD 6990 too; AIBs have already announced their OC versions, without the BIOS switch... so at the least we will see 880MHz versions and 900MHz+.
This is maybe where the real fight will be: 590 AIB OC retail version vs 6990 AIB OC retail version. They will surely cost an arm and a leg, but I think the AIBs will have some fun there.
Leaked specs?
Two GTX 580 chips
Clockspeed 60xMHz/12xxMHz/3400MHz
http://www.sweclockers.com/nyhet/136...as-den-22-mars
Yea, and those OC'd cards will be overpriced as well. They won't be the norm, just like any OC cards.
And I have a 6950 right now and I can tell you that the dual BIOS is AMAZING. Between flashing to a 6970, trying to adjust voltages, having bad flashes etc., now I just flip the switch, boot, flip it again, flash the BIOS and it's done. Simple and easy... flashing a BIOS within a minute... love it.
As far as the easy OC goes, it's not about being able to do it; it's the fact that AMD's card can claim higher results because it can do it in stock fashion. Of course both will overclock from there to achieve even higher performance easily within Windows.
Bottom line though, review sites will be able to test the 590 at 6xxMHz/3400 mem (if the above is true) and the 6990 at 880/5000, since those are stock clocks. Surely they will have the 830 as well, but the 880 will be there too. Also, it's a LOT easier to flip a switch for people who don't want to OC; they know it works or else AMD wouldn't have put it there, so some people feel safer and more confident using it. Of course all of us here can easily clock chips, but XS members aren't the only people who buy parts, just saying.