Forgot the name of the website... it's the #1 site where employees rate and review the companies they work for or have worked for, plus they give their CEOs approval ratings.
Yeah, I know... so what do you think? Will Fermi need a fast CPU or not? They showed it off with a 960...
Well, if the TDP numbers I heard are true, then it's 50W more than a 285... and that's really a lot... I can't imagine what kind of heatsink occupying two slots is needed to keep that cool... I just wondered if that was only early silicon and if the newer stuff is running cooler...
That's not true. While a frame gets rendered, data is constantly being written to and read from the memory... and that is NOT mirrored between the two GPUs... otherwise both frames would end up identical...
Both GPUs get the same raw data, I guess, but they then use their memory and memory bandwidth independently... if they really mirrored each other's memory, you would have to split the memory into two partitions, and the effective memory per GPU would actually drop in half.
But why would you do that? Why does GPU1 need to know what the other GPU is doing with the data and what its frame will look like?

It's even worse: I've seen several salespeople in shops telling people marketing nonsense I could SEE they knew was not true... but they don't care, they want to sell their stuff... I can understand it, but I wouldn't do it...
I don't know, I consider this an offline chat... as soon as something interesting is discussed, I go back a page or two to catch up on what happened... I prefer too much info over not enough info that somebody thought was unimportant... and besides, even if there is little or no info, it's fun to talk to others about tech, the companies that make it, and their products...
Every frame that is rendered using AFR can only use the amount of memory on one card.

Quoted from Mad Mod Mike on SLIzone:

The graphics memory is NOT doubled in SLI mode. If you have two 128MB graphics cards, you do NOT have an effective 256MB. Most games operate in AFR (alternate frame rendering). The first card renders one full frame, and then the next card renders the next frame, and so on. If you picture SLI working this way, it is easy to see that each frame only has 128MB of memory to work with.
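If it helps, here's a minimal sketch of that AFR memory point in Python (my own toy model; the function and numbers are made up purely for illustration):

```python
# Toy model of AFR memory capacity in SLI (illustrative only).
# Assumption: under AFR each GPU renders a complete frame by itself,
# so each GPU needs its own full copy of textures, buffers, etc.

def effective_afr_memory_mb(per_card_mb: int, num_cards: int) -> int:
    """Memory available to any single frame under AFR.

    Total installed VRAM is per_card_mb * num_cards, but because each
    frame is rendered entirely on one GPU, a frame can never use more
    than one card's worth of memory.
    """
    return per_card_mb  # NOT per_card_mb * num_cards

print(effective_afr_memory_mb(128, 2))  # -> 128, not 256
```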
What does that have to do with memory bandwidth?
I never said that you end up with double the memory, but from my understanding you do end up with double the bandwidth...
At the same time, a dual-GPU card is working on two frames, and each GPU can read and write independently to its own memory while working on those frames. As a result, in the same period of time, you end up with (up to) double the frames being rendered and (up to) double the reads and writes to memory. Just think about it... you can't produce additional frames without additional reads/writes to memory...
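Here's a rough back-of-the-envelope version of that bandwidth argument (all numbers invented, just to show the shape of it):

```python
# Back-of-the-envelope AFR bandwidth model (made-up numbers).
# Each GPU reads/writes only its own local VRAM, and both do so at the
# same time, so the *aggregate* bandwidth across the system doubles
# even though the memory capacity available to one frame does not.

PER_GPU_BANDWIDTH_GBPS = 160.0   # hypothetical single-GPU figure
TRAFFIC_PER_FRAME_GB = 2.0       # hypothetical reads+writes per frame

def frames_per_second(num_gpus: int) -> float:
    # Ideal AFR scaling: n GPUs render n frames concurrently.
    aggregate_bandwidth = PER_GPU_BANDWIDTH_GBPS * num_gpus
    return aggregate_bandwidth / TRAFFIC_PER_FRAME_GB

print(frames_per_second(1))  # 80.0  fps worth of memory traffic
print(frames_per_second(2))  # 160.0 -> up to double the frames
```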
And think about the real-world performance of dual-GPU cards... if you just doubled the shaders, I don't think we would get as much of a performance boost as we see going from single-GPU to dual-GPU cards.
What you're saying is that both GPUs only use the memory of one of the two cards... which makes no sense... rendering a frame takes several steps: you read from memory, manipulate the values, and write back to memory... as far as I know, it's impossible to render two different frames if you force the memory of both GPUs to be 100% identical at all times... if you did that, you'd end up with two identical frames...
So for dual-GPU solutions, the shader power is doubled, the triangle setup is doubled, texturing power is doubled, and memory bandwidth is doubled... but you need double the memory compared to a single GPU to have the same effective memory capacity. And another downside is that you need more CPU power, plus there's a loss of efficiency from coordinating both GPUs to work on the same scene...
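To sum that trade-off up as a little table (idealized ceilings based on the reasoning above, not measured results):

```python
# Sketch of how dual-GPU (AFR) scales each resource, per the post above.
# Factors are idealized best-case ceilings, not benchmarks.

DUAL_GPU_SCALING = {
    "shader_power":     2.0,  # doubled
    "triangle_setup":   2.0,  # doubled
    "texturing_power":  2.0,  # doubled
    "memory_bandwidth": 2.0,  # doubled (aggregate across both GPUs)
    "effective_memory": 1.0,  # NOT doubled: each GPU holds a full copy
}

for resource, factor in DUAL_GPU_SCALING.items():
    print(f"{resource:18s} x{factor}")
```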
What I wonder about is that the CPU requirements for dual-GPU setups are not double what a single-GPU setup requires. How come?
It's definitely higher, but not double, at least not in most scenarios... does anybody know why?
No news on Fermi TDPs?
Why should it be double? If the CPU were already a bottleneck with, say, a single 5870, then it would need double the power when you add a second 5870. In this age of console ports a 5870 can be bottlenecked by the CPU quite often, I accept that, but when you're CPU-bound you're already at 100 FPS levels, so you wouldn't plug in a second card anyway.
As long as the limiting factor is the graphics card, I don't think a second card would require double the CPU power.
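One toy model for why the CPU load rises with a second GPU but doesn't double (all numbers invented; afr_efficiency is just my guess standing in for SLI overhead):

```python
# Toy model of why CPU load rises with a second GPU but doesn't double.
# Assumption: CPU work is roughly proportional to delivered frame rate
# (draw calls, game logic per frame), capped by what the CPU can feed.

def delivered_fps(cpu_max_fps: float, gpu_fps: float, num_gpus: int,
                  afr_efficiency: float = 0.85) -> float:
    """Frame rate is the lower of what the CPU can feed and what the
    GPUs can render; AFR never scales perfectly (afr_efficiency < 1)."""
    gpu_limit = gpu_fps * num_gpus * (afr_efficiency if num_gpus > 1 else 1.0)
    return min(cpu_max_fps, gpu_limit)

cpu_max, gpu = 120.0, 60.0
one = delivered_fps(cpu_max, gpu, 1)   # 60 fps, CPU half idle
two = delivered_fps(cpu_max, gpu, 2)   # 102 fps, CPU busier but not 2x
print(one, two, two / one)             # scaling factor < 2.0
```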
GTX 480 to debut at CeBIT: http://www.fudzilla.com/content/view/17586/1/
I broke; I ended up with a 5870. Couldn't wait any longer.
My build is taking way too long.
die g92 dieeeeeeeeeeeeeeeeee
GTX 580 G92 edition, thank God for the tags.
lol, I don't remember any G92 cards with 768MB of memory... and GPU-Z says it's a GT330, not a GT340.
A lot of gimping going on at nVidia. It's like a Frankenstein GPU, a PCB garage sale.
Look at my quote in Jowy Atreide's sig.
A history lesson in complacency.
Like no other industry in the history of the world, computers ushered in dramatic increases in performance and functionality and unheard-of price reductions. Competition is fierce. Those who take a break and fail to push the boundaries are doomed to be among the forgotten has-beens: Cyrix, 3Dfx, VIA, S3, Abit.
Four years of milking Athlon64/Opteron sales, plus a delayed Barcelona with the TLB bug, almost crushed AMD.
That's why nVidia's 2007-2010 rebrand-fest is concerning. Sure, way back before the 8800GT, you could argue that DX10.1 was a novelty. But time goes by fast. A hush-hush DX10.1 GT240 rollout, two MONTHS AFTER AMD launched DX11 cards... pathetic. Just because you were making money yesterday doesn't guarantee future revenue.
It's a mystery why nVidia alone has taken it upon themselves to sabotage graphics progress. It's time to get their act together. Optimus is a great start and should be in EVERY notebook.
No more 5-7 month delays for the launch of DX11 Fermi mainstream and value derivatives. Bad Company 2 is coming out in 20 days. Hup-two, hup-two, double time, soldier!
Yeah, they just magically rebranded DirectX 10.1 into their chips too. 40nm doesn't count for anything either? I guess the definition of "rebrand" has changed. If that's the case, then many chips are just rebrands and no one should buy those. In my opinion, if it still uses silicon, it's a rebrand.
You're contradicting yourself... and how exactly is this GF100 news?
Jowy A, shouldn't you be mad at ATI for rebranding the 4870 as a 5770? Just because it's built on 40nm and has DX11 doesn't mean it's new!