This may be true, but two separate cards nearly always outperform the single-card/dual-GPU solution too.
:)
I think the question is more whether they want to release a reference card at 375W. Just because there are 2x 8-pin connectors on the card (like every dual-GPU card: 590, 6990, 5970, etc.) doesn't mean they're aiming for the full 375W. It also doesn't mean they need to underclock it to stay well below that: the 680 draws far less power than a GTX 580, so two of them at full speed should not be a problem at all.
Don't forget the 680 has the turbo feature, which is ideal for dual cards... (the core speed can scale up and down as a function of the TDP limit, so they can play with it to preserve as much performance as possible whatever the base clock is).
http://en.expreview.com/2012/04/11/m...ked/22383.html
Quote:
After the release of the GeForce GTX 680, what new product will follow? We have heard from sources that the dual-GPU GTX 690 will debut in May.
The GTX 690 will be based on two GK104 cores and features dual 8-pin external power connectors, three DVI ports, a mini DisplayPort, PCI-E 3.0 support, Adaptive Vertical Synchronization, GPU Boost and TXAA technology.
Additionally, in order to secure stable operation of the system, a PSU of at least 650W is necessary. The other specifications are still awaited.
Am I the only one still wondering what's going on with the 7990? I would have expected AMD to release it by now to steal some of the GTX 680's thunder - isn't it odd that we haven't heard anything?
Disagree. There is a safety margin per GPU (e.g. 20W) when they come up with the 195W number.
Adding two GPUs and subtracting a nominal power saving of 20W for combining the PCBs, we get 2*195-20 = 370W. The 20W per-GPU margin has not changed.
If you drop the clocks, the per-GPU margin goes up.
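For anyone who wants to sanity-check that, here's the arithmetic spelled out. The 20W margin and 20W PCB saving are just the illustrative figures from above, not official numbers:
Code:
# Toy board-power estimate for a dual-GK104 card.
# The 20W per-GPU margin and the 20W saving from sharing one PCB are
# illustrative assumptions from the post above, not official figures.
SINGLE_CARD_TDP = 195    # W, GTX 680 board power
SHARED_PCB_SAVING = 20   # W, assumed saving from combining two boards

dual_card_tdp = 2 * SINGLE_CARD_TDP - SHARED_PCB_SAVING
print(dual_card_tdp)     # 370 W, just under the 375W connector/slot limit
# Lowering clocks/voltage reduces per-GPU draw, so the per-GPU margin only grows.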
In terms of PCB power you are way closer to pushing the theoretical limit. Even though you do have separate sets of capacitors for each GPU, you are still feeding all the power from the same entry points. Why would the per-GPU safety margin even be of concern in the first place?
To prevent AMD from starting crap like this:
http://www.geeks3d.com/public/jegx/2...be_grilled.jpg
and this:
http://www.legitreviews.com/images/r.../egg_start.jpg
While funny, I was specifically talking about the GTX 680 lol. We all know how hot the 480 ran.
As for the Catleap monitors, here's a 680 pushing 133Hz @ 2560x1440....
http://cdn.overclock.net/6/65/65b4f2a9_17.jpeg
Is that yours? So it's a 2B build?
:D
I wish, I still haven't gotten anyone to buy my 3x 24" monitors :( Not that 7950s will run past 85Hz anyway - software limits them there, and hardware caps them around 100Hz, lol.
Yeah, he has a 2B. He could only do 126Hz before, but it's apparently warmer now so he thinks he can go higher.
I want to buy a Catleap, but I don't like the idea of taking a chance on not getting a 2B without a quality warranty/return policy. If the new batches are also quality overclocking models, I might splurge on one.
I'm happy with my 60Hz QH270. For just over $300 you can't go wrong. It's just such a nice display. That said, I'm not a huge multiplayer FPS gamer. IDK, I'm happy with my purchase.
I'll seriously think about it then. I heard the next batch is supposed to come in about a month
Any idea about response times, input lag, etc. on these? While I certainly wouldn't need or even want 2560x1440 @ 120Hz+ for gaming (the cost of making sure you get sufficient FPS is just not worth it to me), I do wonder how it does at 1920x1080. Has anyone with one of these monitors tested that? I'd guess most Catleap monitors should run 1920x1080 @ 120Hz fine. Also, does it really deliver true 120Hz, so that frames don't just get dropped?
Well put, RPG. I don't have the graphics card to push 120 frames at that resolution with modern games, but input lag would certainly be of interest regardless. I've also heard stories of gamma and brightness gradients, because these are A-/A panels that effectively failed the specifications required for what Dell and Apple use. I don't really think stuck pixels will be as much of an issue as correct color representation, etc.
Any real information on the 780? I keep opening this thread every time someone posts, hoping for actual info, and I get frustrated when I see there is nothing new...
Well, the question is what counts as "real".
Latest info I have is
4GB GDDR5
512-bit bus, 64 ROPs
3072 Cuda Cores
128 TMUs
"Cuda Next" abilities
2+ TeraFLOPs of DP performance
And of course much lower clocks than GK104. Maybe 800 MHz or so.
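For what it's worth, here is what those rumored numbers would imply for raw throughput. This is only a back-of-the-envelope sketch: the 800 MHz clock, 2 FLOPs per CUDA core per clock, and the DP:SP ratios are all assumptions, not confirmed specs.
Code:
# Back-of-the-envelope throughput from the rumored figures above.
# Clock, FLOPs per CC per clock and the DP:SP ratio are assumptions.
cuda_cores = 3072
clock_ghz = 0.8                          # "maybe 800 MHz or so"

sp_gflops = cuda_cores * 2 * clock_ghz   # 4915.2 GFLOPS single precision
dp_half_rate = sp_gflops / 2             # ~2458 GFLOPS if DP runs at 1/2 SP rate
dp_third_rate = sp_gflops / 3            # ~1638 GFLOPS if DP runs at 1/3 SP rate

print(sp_gflops, dp_half_rate, dp_third_rate)
# "2+ TFLOPS of DP" at ~800 MHz would therefore point to roughly a 1/2-rate DP
# design, or a 1/3-rate design clocked noticeably higher.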
they should place a holder for fats to run.
The even worse part is that you can't do the 120Hz @ 2560x1440 in SLI or CF :rofl:
I don't think anyone has really done 1080P on the 2B models because I'm pretty sure they lack a scaler.
It really does 120Hz if you can push 120FPS :) After that you get interesting lines and artifacts because the panel's limitation is being reached.
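To put some numbers on why those limits show up, here's a rough pixel-clock estimate. It assumes CVT reduced-blanking timings (htotal ~2720, vtotal ~1481) and that the panel is driven over dual-link DVI, so treat it as a ballpark sketch rather than the exact figures for these monitors.
Code:
# Rough pixel-clock estimate for 2560x1440 at various refresh rates,
# assuming CVT reduced-blanking timings and a dual-link DVI connection.
H_TOTAL, V_TOTAL = 2720, 1481
DL_DVI_SPEC_MHZ = 330    # dual-link DVI spec limit (2 x 165 MHz)

for refresh in (60, 100, 120, 133):
    pixel_clock_mhz = H_TOTAL * V_TOTAL * refresh / 1e6
    print(f"{refresh:>3} Hz -> ~{pixel_clock_mhz:.0f} MHz "
          f"({pixel_clock_mhz / DL_DVI_SPEC_MHZ:.2f}x the DL-DVI spec)")
# ~242 MHz at 60 Hz is within spec; ~483 MHz at 120 Hz and ~536 MHz at 133 Hz
# are far past it, which is why lines/artifacts appear as the electronics max out.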
That seems to be the general consensus, approximately just all of gk104 doubled. Any idea on when it'll come out? I really want to get a gk104 chip as a midrange card.
To be fair, the monitors that do have a scaler cost $1k. You get what you pay for in the end lol. Honestly my concerns lie purely with making sure I don't get a dud, as these are literally subpar panels. If I can get one that has zero dead pixels, no gamma problems and low backlight bleed, I'd be very content for just $350. 2ms and 100Hz would certainly be really nice, but as I understand it, those who really want to overclock can just buy the additional PCB + cables anyway (though I'm not sure why only the 2B came with them in the first place).
I guess it all depends on AMD. I still don't get why they've been sitting on their hands now that NV is getting enough supply. Yes, they reduced their prices, but they should have launched the 7990 at the same time to steal the top-end sales. Until AMD poses a real threat to Nvidia's sales, Nvidia has no reason to release the GTX 780.
The market for dual cards is so small, it doesn't matter except for prestige. I'm also pretty sure that the 780 is far from ready. The tape-out of GK110 was only last month afaik. Add 4-6 months, more likely 6, and you end up at September at the earliest. And then most of the chips will go to Tesla cards. Desktop most likely will come second. I wouldn't count on good supply before 2013.
Continuing on the monitor discussion, it seems the Crossovers are the way to go if you don't care about 100 hz. They have a far better stand, the same panel and apparently have slightly better input lag (though I'm not sure how people came to that conclusion). Still it seems that stuck pixels are the primary issue of concern.
Can you imagine gaming on 3 of those with tri-SLI GTX 780s? Assuming they have enough horsepower to chew through 7680x1440 with AA, that would be a sight to behold.
A guy (Ailuros) who knew pretty much everything about GK104 in advance, as he has his own sources:
http://www.forum-3dcenter.org/vbulle...d.php?t=523399
64 ROPs, 6B transistors and 6+8-pin!!!
:-)
I do hope the king Kepler card comes with 4GB on a 512-bit bus; that sounds heavenly to me.
lol, you are right.. And why 3 SMX per GPC? Wouldn't 2 or 4 be more appropriate? It's strange to use an SMX count like that. And why 6 GPCs? Looks like a typo to me.
For 3072, something like 4 SMX x 4 GPC or 2 SMX x 8 GPC would be correct. But they could also reduce the number of CCs per SMX (instead of using 192 SPs), something we have already seen with Fermi (32 SPs/SM on GF100 versus 48 SPs/SM on GF104).
Let's not forget the LD/ST units + SFUs + caches etc. take up a lot of space in an SMX (there are 32 of each per SMX in addition to the 192 CCs), and that's without counting the uncore units (PolyMorph engines, warp schedulers, dispatch units, TMUs, etc.).
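As a quick sanity check on those layouts (assuming 192 CCs per SMX, the GK104 figure - GK110 could well differ):
Code:
# Quick check of possible GPC x SMX layouts, assuming 192 CUDA cores per SMX.
CC_PER_SMX = 192

for gpcs, smx_per_gpc in [(4, 4), (8, 2), (6, 3)]:
    total_smx = gpcs * smx_per_gpc
    print(f"{gpcs} GPCs x {smx_per_gpc} SMX = {total_smx} SMX = "
          f"{total_smx * CC_PER_SMX} CCs")
# 4x4 and 8x2 both give 16 SMX = 3072 CCs; the rumored 6 GPCs x 3 SMX gives
# 18 SMX = 3456 CCs, which is why the 3072 figure looks inconsistent.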
Good question, I'll ask :)
Okay, I just did some calculations and 3072 CCs at 850 MHz doesn't give the listed 5.875 TFLOPS, but 3456 CCs do. So I'm tempted to say the "3072" number is the typo. Also, the GTexels/s number works out with the given 8 TMUs/SMX and 18 SMXs.
Also Ailuros in the other thread says that GK110 CC ≠ GK10x CC, so that might account for why there are so many CCs reported in GK110.
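For anyone who wants to reproduce that check, here is the arithmetic (assuming 2 FLOPs per CC per clock, 192 CCs and 8 TMUs per SMX, at 850 MHz):
Code:
# Which core count matches the listed 5.875 TFLOPS at 850 MHz?
# Assumes 2 FLOPs per CUDA core per clock, 192 CCs and 8 TMUs per SMX.
clock_ghz = 0.85

for cores in (3072, 3456):
    smx = cores // 192
    gflops = cores * 2 * clock_ghz
    gtexels = smx * 8 * clock_ghz
    print(f"{cores} CCs: {gflops:.1f} GFLOPS, {smx} SMX, {gtexels:.1f} GTexels/s")
# 3072 CCs -> 5222.4 GFLOPS; 3456 CCs -> 5875.2 GFLOPS, matching the listed
# 5.875 TFLOPS, with 18 SMX x 8 TMUs = 144 TMUs giving 122.4 GTexels/s.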
Laugh all you want, I know he is very well informed. I wouldn't say that lightly.
So boxleitnerb asked about the apparent CC discrepancy on the 3DCenter thread and got this response:
Google Translate gives "It fits anyway, but the right was ever there. TBA." Can someone please translate it better?
Quote:
Originally Posted by Enforcers
Does anyone know if the Monitors still have to be the same sync polarity for 2D Surround?
Sorry, on my cell phone so I can't upload the pic here.
But NVIDIA has posted a teaser of an upcoming card:
http://www.hardwarecanucks.com/news/...graphics-card/
Not really much to see but at least it's something. :)
Maybe....both?
You're now talking out of your personal wish or from your sources point of view? :D
I definitely am more interested in a GTX 670 and especially want to see whether Nvidia plans one or two GTX 670 cards; a GTX 670 (non-Ti) would be right up my alley as a good purchase candidate.
Well we haven't heard anything about the 7990 so maybe it's the 670 Ti
So are we thinking the beast card could be single-GPU, or is it looking like dual?
If I had to make a choice, I would say 660/670... they really need something like the 660 for the mid-range, and since they have now completely stopped 580 production, it makes sense.
Logic says the 660/670 are more profitable in terms of sales. But from a certain point of view, I believe it can depend on the level of production of their 28nm parts.
- Low production: the 670 (GK104) is not a good idea; the 690 will not need many wafers and doesn't need wide availability, so launching it now can be a good solution in this case.
- If production improves: the 660 and 670 are needed in the lineup (it's not the $500 cards that sell the most).
http://vr-zone.com/articles/nvidia-s...5th/15626.html
Quote:
Sources have informed SweClockers that this SKU is scheduled for an imminent launch and to expect it in the week commencing April 30th, implying it will be launched by May 5th at the latest and April 30th at the earliest. Either way, it also means it would manage to beat out the competition while we wait for AMD's HD 7990 to arrive. The GTX 690 comprises dual GK104 GPUs, featuring 3072 CUDA cores and 4GB of GDDR5 memory.
Any ideas on price? Guesses? us$899?
:up:
I'm just going to leave this here...
http://www.brightsideofnews.com/news...on-may-14.aspx
If that's true, out of 111 pages that would be one of the very few snippets of actual information pertaining to the gtx 780 lol
Yessir, that's why I made sure I posted it in this thread and nowhere else :)
I wanted to make sure I posted something of value when I got it, especially when you consider the fact that I was originally one of the first few people propping up this thread with my own BS. :)
Realistically speaking, nobody should be surprised at all. Anyone who's been following the situation could tell they were saving the fully-fledged part for GTC especially after I hounded them about CUDA performance and they were mum.
Expect a new CUDA Compute and fully-fledged Kepler :)
Yields already suck. There is no way they will double the die size. Not without a good expectation of solving the process problems.
At this point, I'm not entirely sure. I don't recall them EVER making a dual-GPU Tesla part. That's the only reason why I'd shy away from such a view...
http://en.wikipedia.org/wiki/Nvidia_Tesla
And yes, I think we would be back to 588mm^2 (to be exact)... which we all know NVIDIA has a lot of experience dealing with :P
But at the same time, I'm not entirely sold that they'd necessarily have the exact same chip. I feel like there would be tradeoffs... I'm just not convinced they'd build a HUGE chip that's double the size.
I could easily see their 780 being ~2400 cores, but not much higher than that.
And why not use dual GPUs for CUDA? I thought servers wouldn't even care how many GPUs they have, since they aren't limited to 4 like consumer Windows is. This is the first time they've had such a small chip with great perf/mm².
Do you think they would take the chip as-is, or would they need to make some serious modifications for CUDA apps?
There is much much more to GPGPU performance than the number of cores. Two GK104 put together would not cut it, not at all.
I concur. I think they will have to re-work it, and I'm not entirely sure the GPGPU part will be a 3000+ core part; I think a 2400-core part could potentially be more plausible (I was thinking more like 2034).
The worst part is that I think people are confusing the dual-GK104 part with the GPGPU one being shown at GTC. I don't necessarily think they'll be one and the same.
So, if single core Kepler is two slots, will dual-core Kepler be 3 or 4 slots?
What waste? If you designed the GPU with multicore in mind then there's no reason for tons of wasted resources.
Beyond that, he didn't mean "dual-core" as in two cores glued together...
Exactly, I should have said two-GPU, sorry.
If you design such a large die, could it not be possible to design it with salvage in mind rather than redundancy? That is, instead of having duplicate parts to ensure dies can meet the minimum specs (which increases die size), could you not design a chip so that it could be cut into smaller dies and still be sold? The extra interconnect logic on a fully working die may not be much greater than the otherwise redundant logic, and the percentage of fixed-function logic required by a GPU is shrinking each generation, so the parts replicated to allow dies to be cut would have less and less of an impact, and could be offset by the increased yield of saleable stock.
This would allow as an example:
100% of die intact = gk100/110 - sold as Tesla at big bucks
small defect on one half - die can be cut into 1 fully enabled gk104 and one crippled gk104, or 1 crippled gk100/110 which could be sold as a Halo 'ultra' gaming card.
small defect on both halves - die can be cut into 2 crippled gk104.
large defect on one half - 1 fully enabled gk104, defective area discarded.
Excess gk100 and demand for gk104 - cut die in half to give 2 gk104.
Makes a large die design more feasible, as there is far less waste, and the design costs would be shared between the product lines. Is there any glaring reason I'm overlooking why this couldn't work?
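To make the yield argument a bit more concrete, here's a toy model. The die area, defect density and the assumption that a die is only good if it has zero random defects are all made up for illustration; real yield models are much more involved.
Code:
import math

# Toy Poisson yield model for the "cuttable big die" idea above.
# Die area and defect density are made-up illustrative numbers.
DIE_AREA_CM2 = 5.5       # hypothetical ~550 mm^2 big die
DEFECT_DENSITY = 0.4     # hypothetical defects per cm^2

def zero_defect_yield(area_cm2, d0=DEFECT_DENSITY):
    """Probability a die of this area has no random defects (Poisson model)."""
    return math.exp(-d0 * area_cm2)

whole_die_good = zero_defect_yield(DIE_AREA_CM2)        # sellable as the full chip
half_good = zero_defect_yield(DIE_AREA_CM2 / 2)         # one clean half
at_least_one_half_good = 1 - (1 - half_good) ** 2       # salvageable after cutting

print(f"full die good:          {whole_die_good:.1%}")           # ~11%
print(f"at least one half good: {at_least_one_half_good:.1%}")   # ~55%
# With these made-up numbers only ~11% of dies are fully clean, but ~55% have at
# least one defect-free half, which is where the salvage idea gains its appeal.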
Not sure if GTX launch or zombie apocalypse... Got a package from Nvidia a couple of days ago... they sent me a crowbar...