hahah first you broke Ati fan boys ballz now you are after Nvidia fan boys...
Geeez you are on a ballz breaking spree :D
forgot the name of the website... its the no1 site where employees rate and review the companies they worked or work for, plus they give their ceos approval ratings.
yeah i know... so what do you think? will fermi need a fast cpu or not? they showed it off with a 960...
well if the tdp numbers i heard are true, then it's 50W more than a 285... and that's really a lot... i can't imagine what kind of 2-slot heatsink you'd need to keep that cool... i just wondered if that was only early silicon and if the newer stuff is running cooler...
that's not true, while a frame gets rendered, data is constantly written to and read from the memory... and that is NOT mirrored between the two gpus... otherwise both frames would end up identical...
both gpus get the same raw data, i guess, but they then use their memory and memory bandwidth independently... if they really mirrored each other's memory then you would have to split the memory into 2 partitions and the effective memory per gpu would actually drop by half
but why would you do that? why does gpu1 need to know what the other gpu is doing with the data and what its frame will look like?

its even worse, ive seen several sales people in shops telling people marketing nonsense i could SEE they knew was not true... but they don't care, they want to sell their stuff... i can understand it, but i wouldn't do it...
idk, i consider this an offline chat... as soon as something interesting is discussed i go back a page or two to catch up on what happened... i prefer too much info over not enough info that somebody thought was not important... and besides, even if there is no or little info, its fun to talk to others about tech, the companies that make them, their products... :D
Every frame that is rendered using AFR can only use the amount of memory on one card. Quoted from Mad Mod Mike on SLIzone:
The graphics memory is NOT doubled in SLI mode. If you have two 128MB graphics cards, you do NOT have an effective 256MB. Most of the games operate in AFR (alternate frame rendering). The first card renders one full frame and then the next card renders the next frame and so on. If you picture SLI working in this way it is easy to see that each frame only has 128MB of memory to work with.
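for the sake of argument, here's a tiny python sketch of what that quote describes... it's just a toy model, nothing to do with any real driver, and the 128MB / 2-card numbers are simply the example from the quote: frames alternate between the gpus and each frame only gets one card's memory budget.

```python
# Toy model of AFR (alternate frame rendering) as described in the quote above:
# each frame is rendered entirely by one GPU, using only that GPU's local memory.
# The sizes are the 2x128MB example from the quote, not real hardware specs.

CARD_MEMORY_MB = 128   # per-card memory
NUM_GPUS = 2

def gpu_for_frame(frame_number):
    """Frames alternate between GPUs: 0, 1, 0, 1, ..."""
    return frame_number % NUM_GPUS

def memory_available_for_frame(frame_number):
    """A frame only ever sees the memory of the GPU rendering it,
    so the effective capacity per frame stays 128MB, not 256MB."""
    _ = gpu_for_frame(frame_number)   # which card does the work...
    return CARD_MEMORY_MB             # ...but the budget is one card's worth

for f in range(4):
    print(f"frame {f}: GPU {gpu_for_frame(f)}, "
          f"{memory_available_for_frame(f)} MB usable")
```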
what does that have to do with memory bandwidth?
i never said that you end up with double the memory, but you do end up with double the bandwidth from my understanding...
in that same amount of time a dual-gpu card is working on two frames, and each gpu can read and write independently to its own memory while working on those frames. as a result, in the same period of time, you end up with (up to) double the frames being rendered, and (up to) double the reads and writes to memory. just think about it... you can't produce additional frames without additional reads/writes to memory...
and think about the real-world performance of dual-gpu cards... if you only doubled the shaders i don't think we would see as much of a performance boost as we get going from single to dual-gpu cards
what you're saying is that both gpus only use the memory of one of the two cards... which makes no sense... rendering a frame takes several steps, you read from memory, manipulate the values and write back to memory... as far as i know it's impossible to render two different frames if you force the memory of both gpus to be 100% identical at all times... if you did that then you'd end up with 2 identical frames...
so for dual-gpu solutions the shader power is doubled, the triangle setup is doubled, texturing power is doubled, memory bandwidth is doubled... but you need double the memory compared to a single gpu to have the same effective memory capacity. and another downside is that you need more cpu power, and you lose some efficiency when coordinating both gpus to work on the same scene...
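that tally, as a crude back-of-the-envelope python sketch... it assumes ideal AFR scaling (real cards lose some efficiency) and the numbers are placeholders, not the specs of any real card:

```python
# Rough tally of the argument above, assuming ideal AFR scaling.
# All numbers are made-up placeholders, not real card specs.

single_gpu = {
    "shader_throughput": 1.0,       # relative units
    "memory_bandwidth_gbs": 100.0,
    "memory_capacity_mb": 1024,
}

def dual_gpu_afr(card):
    """Under AFR, throughput and bandwidth add up across the two GPUs,
    but each frame still only has one card's memory to work with."""
    return {
        "shader_throughput": 2 * card["shader_throughput"],
        "memory_bandwidth_gbs": 2 * card["memory_bandwidth_gbs"],
        "effective_memory_mb": card["memory_capacity_mb"],  # NOT doubled
    }

print(dual_gpu_afr(single_gpu))
```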
what i wonder about is that the cpu requirements for dual-gpu setups are not double what a single-gpu setup requires. how come?
it's definitely higher, but not double, at least not in most scenarios... does anybody know why?
no news on fermi tdps?
Why should it be double? Only if the CPU were already the bottleneck with, say, a single 5870 would adding a second 5870 require double the CPU power. In this age of console games a 5870 can be bottlenecked by the CPU quite often, I accept that, but when you're CPU-bound you're already at 100 FPS levels, so you wouldn't plug in a second card anyway.
As long as the limiting factor is the graphics card I don't think a second card would require double CPU power.
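a minimal sketch of that bottleneck argument, assuming the simplest possible model (frame rate limited by whichever of CPU or GPU takes longer per frame) and with invented millisecond numbers:

```python
# Simple bottleneck model for the "why isn't CPU load double?" question.
# Assumption: frame rate is limited by whichever of CPU or GPU is slower per frame.
# All times are made-up milliseconds per frame.

def fps(cpu_ms_per_frame, gpu_ms_per_frame, num_gpus):
    # With AFR the GPU work is spread over the cards, the CPU work is not.
    effective_gpu_ms = gpu_ms_per_frame / num_gpus
    return 1000.0 / max(cpu_ms_per_frame, effective_gpu_ms)

cpu_ms, gpu_ms = 8.0, 20.0        # GPU-bound with one card
print(fps(cpu_ms, gpu_ms, 1))     # ~50 fps, CPU still has headroom
print(fps(cpu_ms, gpu_ms, 2))     # ~100 fps, now the CPU is near its limit
```

so the CPU only needs to be "double" if it was already the limiting factor with one card, which is exactly the point made above.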
GTX480 to debut at Cebit: http://www.fudzilla.com/content/view/17586/1/
i broke... i ended up with a 5870, couldn't wait any longer
http://i45.tinypic.com/20kx46c.jpg
http://i46.tinypic.com/dchp1.jpg
More G92!
The 9600gso 55nm cards with a new name.
8800gt = 9800gt = GT 240 = GT 340
identical chips, equal performance, different names, 2 of which imply a performance upgrade
Nice. Rise of the Undead Part IV.
die g92 dieeeeeeeeeeeeeeeeee
gtx 580 g92 edition :p:, thank god for the tags.
lol, i don't remember any g92 cards with 768mb of memory.... and gpu-z says it's a gt330 not a gt340.
a lot of gimping going on at nVidia. It's like a Frankenstein GPU, a PCB garage sale.
look at my quote in Jowy Atreide's sig.
A history lesson in complacency.
Like no other industry in the history of the world, computers ushered in dramatic increases in performance and functionality and unheard of price reductions. Competition is fierce. Those who take a break, and fail to push the boundaries are doomed to be amongst the forgotten has-beens: Cyrix, 3Dfx, VIA, S3, Abit.
4 years of milking Athlon64/Opteron sales, and a delayed Barcelona with TLB Bug almost crushed AMD.
That's why nVidia's 2007-2010 rebrandfest is concerning. Sure, way back before the 8800GT, you could argue that DX10.1 was a novelty. But time goes by fast. A hush-hush DX10.1 GT240 rollout, 2 MONTHS AFTER AMD launched DX11 cards... pathetic. Just because you were making money yesterday doesn't guarantee future revenue.
It's a mystery why nVidia alone has taken it upon themselves to sabotage graphics progress. It's time to get their act together. Optimus is a great start and should be in EVERY notebook.
No more 5-7 month delays for launch of DX11 Fermi mainstream and value derivatives. Bad Company 2 is coming out in 20 days. Hup-too-hup-too double time soldier!
yeah, they just magically rebranded directX 10.1 into their chips too. 40nm doesn't count for anything either? i guess the definition of rebrand has changed. if that's the case then many chips are just rebrands and no one should buy those. in my opinion, if it still uses silicon it's a rebrand.
you're contradicting yourself... and how exactly is this GF100 news?
:clap:
Jowy A, shouldn't you be mad at ati for rebranding the 4870 as a 5770? just because it's built on 40nm and has dx11 doesn't mean it's new!