hahah first you broke the ATI fanboys' ballz, now you're after the Nvidia fanboys...
Geeez you are on a ballz breaking spree :D
forgot the name of the website... it's the no.1 site where employees rate and review the companies they work or worked for, plus they give their ceos approval ratings.
yeah i know... so what do you think? will fermi need a fast cpu or not? they showed it off with a 960...
well if the tdp numbers i heard are true, then its 50W more than a 285... and thats really a lot... i cant imagine what kind of 2-slot heatsink would be needed to keep that cool... i just wondered if that was only early silicon and if the newer stuff is running cooler...
thats not true, while a frame gets rendered data is constantly written to and read from the mem... and that is NOT mirrored between the two gpus... otherwise both frames would end up identical...
both gpus get the same raw data, i guess, but they then use their memory and memory bandwidth independently... if they would really mirror each others memory then you would have to split the memory into 2 partitions and the effective memory per gpu would actually drop in half
but why would you do that? why does gpu1 need to know what the other gpu is doing with the data and what its frame will look like?

its even worse, ive seen several sales people in shops telling people marketing nonsense i could SEE they knew was not true... but they dont care, they want to sell their stuff... i can understand it, but i wouldnt do it...
idk, i consider this an offline chat... as soon as something interesting is discussed i go back a page or two to catch up on what happened... i prefer too much info over not enough info that somebody thought was not important... and besides, even if there is no or little info, its fun to talk to others about tech, the companies that make them, their products... :D
Every frame that is rendered using AFR can only use the amount of memory on one card. Quoted from Mad Mod Mike on SLIzone:
Quote:
The graphics memory is NOT doubled in SLI mode. If you have two 128MB graphics cards, you do NOT have an effective 256MB. Most of the games operate in AFR (alternate frame rendering). The first card renders one full frame and then the next card renders the next frame and so on. If you picture SLI working in this way it is easy to see that each frame only has 128MB of memory to work with.
what does that have to do with memory bandwidth?
i never said that you end up with double the memory, but you do end up with double the bandwidth from my understanding...
at the same time a dual gpu card is working on two frames, and each gpu can read and write independently to its own memory while working on those frames. as a result, in the same period of time, you end up with (up to) double the frames being rendered, and (up to) double the reads and writes to memory. just think about it... you cant produce additional frames without additional reads/writes to memory...
and think about the real world performance of dualgpu cards... if you would just double the shaders i dont think we would get as much of a performance boost as we see from single to dualgpu cards
what you're saying is that both gpus only use the memory of one of the two cards... which makes no sense... rendering a frame takes several steps, you read from memory, manipulate the values and write back to memory... as far as i know its impossible to render two different frames if you force the memory of both gpus to be 100% identical at all times... if you would do that then youd end up with 2 identical frames...
so for dualgpu solutions the shader power is doubled, the triangle setup is doubled, texturing power is doubled, memory bandwidth is doubled... but you need double the memory compared to a single gpu to have the same effective memory capacity. and another downside is that you need more cpu power and you lose some efficiency coordinating both gpus to work on the same scene...
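heres a rough toy sketch of how i picture AFR (made-up python just for illustration, not real driver code, the GpuMemory/Gpu/render_frame names are all invented): each gpu keeps its own full copy of the static assets and renders every other frame into its own memory, so capacity doesnt add up, but the reads/writes happen on both cards at the same time, which is where the (up to) doubled bandwidth comes from.
Code:
# toy model of alternate frame rendering (AFR) - simplified illustration only
class GpuMemory:
    def __init__(self, size_mb):
        self.size_mb = size_mb          # capacity of THIS card only
        self.resources = {}             # textures, vertex buffers, etc.

    def upload(self, name, size_mb):
        self.resources[name] = size_mb  # the same assets get duplicated on each gpu

class Gpu:
    def __init__(self, gpu_id, mem_mb):
        self.gpu_id = gpu_id
        self.mem = GpuMemory(mem_mb)

    def render_frame(self, frame_id):
        # reads/writes go to this gpu's own local memory only, so the combined
        # bandwidth across both cards roughly doubles while capacity doesn't
        return "frame %d rendered on gpu %d" % (frame_id, self.gpu_id)

gpus = [Gpu(0, 1024), Gpu(1, 1024)]

# static scene data has to live on BOTH cards -> memory capacity does not add up
for gpu in gpus:
    gpu.mem.upload("scene_assets", 700)

# AFR: frame N goes to gpu N % 2, so both gpus work on different frames at once
for frame in range(4):
    print(gpus[frame % 2].render_frame(frame))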
what i wonder about is that the cpu requirements for dual gpu setups are not double of what a single gpu setup requires. how come?
its definitely higher, but not double, at least not in most scenarios... does anybody know why?
no news on fermi tdps?
Why should it be double? If the CPU were already the bottleneck with, say, a single 5870, then it would require double the CPU power when you add a second 5870. In this age of console games a 5870 can be bottlenecked by the CPU quite often, I accept that, but when you're bound by the CPU you are at 100 FPS levels, so you wouldn't plug in a second card anyway.
As long as the limiting factor is the graphics card I don't think a second card would require double CPU power.
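A very rough way to picture it (toy numbers I made up, nothing measured): the CPU work per frame stays about the same whether one or two GPUs render it, so the CPU only becomes the wall once the GPU side gets fast enough.
Code:
# toy bottleneck model with made-up numbers: the frame rate is limited by
# whichever side takes longer per frame, the CPU or the GPU(s)
def fps(cpu_ms_per_frame, gpu_ms_per_frame, num_gpus):
    effective_gpu_ms = gpu_ms_per_frame / num_gpus   # assume ideal AFR scaling
    return 1000.0 / max(cpu_ms_per_frame, effective_gpu_ms)

cpu_ms, gpu_ms = 8.0, 20.0            # gpu-bound single-card scenario
print(fps(cpu_ms, gpu_ms, 1))         # ~50 fps, the gpu is the limit
print(fps(cpu_ms, gpu_ms, 2))         # ~100 fps, still just under the cpu ceiling
print(fps(cpu_ms, gpu_ms, 4))         # ~125 fps, now capped by the SAME cpu work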
GTX480 to debut at Cebit: http://www.fudzilla.com/content/view/17586/1/
i broke, i ended up with a 5870, couldn't wait any longer
http://i45.tinypic.com/20kx46c.jpg
http://i46.tinypic.com/dchp1.jpg
More G92!
The 9600gso 55nm cards with a new name.
8800gt = 9800gt = GT 240 = GT 340
identical, equal performance, different names, 2 of which imply a performance upgrade
Nice. Rise of the Undead Part IV.
die g92 dieeeeeeeeeeeeeeeeee
gtx 580 g92 edition :p:, thank god for the tags.
lol, i don't remember any g92 cards with 768mb of memory.... and gpu-z says it's a gt330 not a gt340.
a lot of gimping going on at nVidia. It's like a Frankenstein GPU, a PCB garage sale.
look at my quote in Jowy Atreide's sig.
A history lesson in complacency.
Like no other industry in the history of the world, computers ushered in dramatic increases in performance and functionality and unheard of price reductions. Competition is fierce. Those who take a break, and fail to push the boundaries are doomed to be amongst the forgotten has-beens: Cyrix, 3Dfx, VIA, S3, Abit.
4 years of milking Athlon64/Opteron sales, and a delayed Barcelona with TLB Bug almost crushed AMD.
That's why nVidia's 2007-2010 rebrandfest is concerning. Sure, way back when before 8800GT, you could argue that DX10.1 was a novelty. But time goes by fast. A hush-hush DX10.1 GT240 rollout, 2 MONTHS AFTER AMD launched DX11 cards... pathetic. Just because you were making money yesterday, doesn't guarantee future revenue.
It's a mystery why nVidia alone has taken it upon themselves to sabotage graphics progress. It's time to get their act together. Optimus is a great start and should be in EVERY notebook.
No more 5-7 month delays for launch of DX11 Fermi mainstream and value derivatives. Bad Company 2 is coming out in 20 days. Hup-too-hup-too double time soldier!
yeah, they just magically rebranded directX 10.1 into their chips too. 40nm doesnt count for anything either? i guess the definition of rebrand has changed. if thats the case then many chips are just rebrands and no one should buy those. in my opinion if it still uses silicon its a rebrand.
you're contradicting yourself... and how exactly is this GF100 news?
:clap:
Jowy A, shouldn't you be mad at ati for rebranding the 4870 as a 5770? just because it's built on 40nm and has dx11 doesn't mean it's new!
1. No, I say GT 240 the second time, first is GT 230.
2. it's not GF100 news, read the thread title! The GT300 thread. GT330 :yepp:
3. because it's built on "40nm and has dx11 doesn't mean it's new", yes actually. It does.
Thanks for being an ass, it just makes this response so much tastier.
EDIT: nvm, had a good response, but i'm not going to feed the troll anymore...
charlie has been saying nvidia yields are bad since g92, although yields probably arent good if the die size rumors are true.
http://www.nvidia.com/object/product...gt_240_us.html
http://www.nvidia.com/object/product...9800gt_us.html
the 340 doesnt look to be a rebrand, because the gt 240 is 4.8 GPixel/s and has a wider bus.
Well his release dates have been a lot more accurate than any other's
die size rumours???
what is there to question??
nVidia openly disclosed that Fermi is 3B transistors.
We also know G200 was 1.4 billion transistors covering a 576mm2 die surface area built on a 65nm process.
Put 2 and 2 together. It's a virtual certainty that the Fermi die will be very large (about as big), with the corresponding heat, power, and yield issues that come with that.
Nobody in the history of silicon has doubled yields by making a 2x bigger chip.
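A quick back-of-the-envelope guess (my own numbers, just scaling transistor density with feature size, which is optimistic since real designs never scale perfectly):
Code:
# rough die size estimate from the disclosed transistor count
# (ideal density scaling only, so treat the result as a lower bound)
g200_transistors  = 1.4e9       # G200 on 65nm
g200_die_mm2      = 576.0
fermi_transistors = 3.0e9       # disclosed by nVidia for Fermi on 40nm

density_gain  = (65.0 / 40.0) ** 2                       # ~2.64x transistors per mm2
fermi_die_mm2 = g200_die_mm2 * (fermi_transistors / g200_transistors) / density_gain
print(round(fermi_die_mm2))     # ~467 mm2 under ideal scaling, likely bigger in practice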
FUN nVidia GPU CLOCK STAT:
Remember GF2.. 175, 200, 225Mhz
GF4.. 250, 275, 300Mhz
GF6.. 325, 350, 400, 425Mhz - notice a pattern yet?
GF8.. 513? 575, 600, 612, 650 - ok a few bad ones..
G200... 633, 602, 612, 648, 576, 738... umm.. what was wrong with nice, evenly spaced 25MHz increments?
according to wikipedia (i.e. a 9 year old):
Code:
Model            Launch      Core   Fab   Transistors  Die size  Bus interface      Memory  Core config  Mem type  Bus width  DX  OpenGL
GeForce GTX 470  March 2010  GF100  40nm  3000M        576?      1x PCI-E x16 2.0   1280MB  448:56:40    GDDR5     320-bit    11  3.2
GeForce GTX 480  March 2010  GF100  40nm  3000M        576?      1x PCI-E x16 2.0   1536MB  512:64:48    GDDR5     384-bit    11  3.2
actually those 40nm chips clock worse than the 55nm chips dont they?
dx11 is enough to make it a different part in my books... its pretty irrelevant, at least now, but its a real change... 10.0 to 10.1 isnt really... if you look at the changes they had to make to go from .0 to .1 its really more of a revision, not a new part... if even that... i mean the 260 v2 with more sps was a more notable improvement and bigger change...
5770 is a 4870 with lower temps, it clocks better, and costs less, and has dx11
nvidia has lower temps too, but worse clocks, same or HIGHER costs and no dx11
i think calling it a g92 rebrand is the nice thing to do here... cause g92 was a kick4ss product, if you call this 10.1 chip a new chip, you would have to call it a square silicon piece of F A I L... calling it a rebrand sounds much more favorable :D
okay, so my sarcasm was lost on everyone. :(
here's the deal, the gt240 is gt200 based.
http://i71.photobucket.com/albums/i1...00gtvgt240.jpg
the 9800gt's g92 and the gt240's gt215 probably have more differences than the 4870 -> 5770. yeah, the 5770 has dx11 and all of that jazz, but it's still basically a modified and shrunk 4870. the gt215 is meant to fill the market UNDER g92 products... ironic isn't it? i'm not trying to defend the gt240, its only remarkable feature is a 9w idle draw, but calling it an 8800 or 9800gt rebrand is just silly. :shrug:
sorry, but what exactly is the difference between the g92 and g200?
nvidia bumped up the specs to almost double, but they used the same building blocks, right?
so nvidia has pumped up their design and then cut it back to below where they started from and revised some features, allowing them to place more checkmarks in the dx table, giving them a 10.1 instead of 10.0...
ati went from dx10.1 to dx11 and reworked the way textures are filtered and how aa works afaik... thats def a bigger difference imo...
yes, now in what way is that rebranding cards? to me it looks like they're filling a gap in the lower end of the market place. i couldn't care less if you don't like the card or nvidia's market strategy, i don't like it either, but let's not make this out to be something it's not.
now let's all smoke a bowl and relax.... :up:
everyone grab some weed and chill out please
in the 40nm 10.1 parts its not rebranding cards, its rebranding tech...
its almost the same as rebranding cards...
the only reason anybody does this is to artificially increase the value of their products beyond what theyre really worth... and how anybody can say theres nothing wrong with that is beyond me... you're basically defending or cheering for somebody whos bending you over... i guess some people are into getting bent over ^^
i think they are on something harder than weed... :D
nvidia marketing and sales seem more like your average speed/cocain fr3ak than a chillaxed weeder...
These are computer parts, everything is recycled, rebranded, regurgitated, cut down, binned, changed slightly and pushed off as the next great thing year after year.
I mean they add a couple mb of cache to a cpu and you have a totally new product, add a higher bin to memory and you now market uber speed memory, change the bezel on an lcd panel and different companies can market the same core product, wrap a new shell on a mouse, add a fan hole to an existing case design.
Everything gets blown way out of proportion....
Jesus. I think this is getting out of hand. A die-shrink to 40 nm with the same "tech" != rebranding IMHO. For me it's only rebranding when the GPU doesn't change at all, e.g. 8800 GT => 9800 GT.
A die-shrink to 40nm is not rebranding. It is a new GPU and deserves a new name to indicate the lower power usage, cooler temps, etc...
Using GT 3xx naming will separate it from the new GF100 series too, and it should satisfy everybody. It is not like those famous nVidia rebrandings any longer.
This card can't come fast enough. my 5870 has the grey screen issue; even with new drivers that are targeted to fix it, the issue is still present. i will stay away from ATI for my computers. i have 3 pc's: 1 with a gtx 285 2gb, 1 with a gtx 275, 1 with a 5870, and the only one that has problems is the ati system, which is my main gaming rig. also the fan sounds like a leaf blower. I'm going to jump on this asap.
You are supposing that Fermi launch will be trouble free...
http://en.wikipedia.org/wiki/Rebranding
Quote:
Rebranding is the process by which a product or service developed with one brand, company or product line affiliation is marketed or distributed with a different identity.
If the product changes, for example through a die-shrink or a hardware change like DX11 support, then it can't be called rebranding. Rebranding is the exact same product with a different identity or name.
I don't agree how Nvidia plays the game with its Geforce, like saaya already said AMD has been doing a better job with its Radeon.
the flickering issue you speak of has a work around...just use the new version of rbe.
we even have members here http://www.xtremesystems.org/forums/...d.php?t=244126 and on techpowerup http://forums.techpowerup.com/showpo...8&postcount=80
seems increased voltage in 2d solves the issue....
grey screen issue, never had one on my 5870.... i would rma your card if i were you, check your vrm temps when it grey screens...
so you're stating you're going to switch to the dark side... i sense the force is weak with you, young padawan.... ;)
well i currently have problems with nvidia's sh_tty drivers, can't install the control panel for some reason lol. now i regret my choice of not buying a 5850
What price did the 8800gtx and the gtx280 launch at? Trying to work out how much to save
It will probably cost about as much as a 5970, so around $750
I hope the water blocks are available soon after launch.
currently 5970 is way overpriced because it has no DX11 competition. I don't think Nvidia will be able to practice similar price gouging when they launch the 4XX series.
everybody said the same about the 295 and 285 cards being overpriced cause there was nothing in the same perf class that competed with them, and that ati would go for the same or lower prices... but they didnt...
i wouldnt be surprised if nvidia launches fermi at pretty high prices... after all it WILL be the fastest single card solution on the planet, and they need the money... :D
well then that says it all...
it will NOT beat a 5970 and it will probably be as fast as a 295 which costs the same right now, maybe itll be a little faster...
so people can now get 295 performance with a single gpu...
i doubt people on a 295 will upgrade to this... they will probably wait for the dual fermi card. people who get a fermi 480 are probably on a 8800gtx 8800ultra 9800gtx 250 260 275 280 or 285 right now... or they are on a 5850 or 5870...
which troll has been messing with the tags... again. Going against admins is never clever
This statement may be somewhat close to the truth... in current titles. As newer engines and upcoming games using liberal tessellation come about, I fully expect the 480 to pull ahead of the 5870/GTX295 by a fair margin. However where launch titles and past games are concerned, I don't think it will have much of a lead, if any at all. All this said, I am treating GF100 as a long(er) term investment, not simply a day 1 upgrade. Nvidia are playing the same card AMD has in the past (designing a gpu with more shader power vs texture power; traditionally they've gone for a more texture heavy approach). Hopefully it turns out well for everyone.
I personally expect to see the 480 launch at $549 USD. As far as what vendors will actually charge for them initially... who the hell knows :p:
Forgive my ignorance, please don't RTFF me, but what exactly are these parts going to be:
GeForce GTX 470
GeForce GTX 480
Will they both be single-GPU Fermi cards with a few clock-speed steps between them, or will the 480 be a dual GPU (unlikely I know)?
Thanks kindly for the info.
This 480 could be the one to go for in 6 weeks then, even if it's slightly slower than the HD5970 on games which scale well with crossfire (such as Stalker series).
For most existing DX9/10 titles I predict the 480 will be the best all-rounder card, since the HD5970 is "crossfire on a card" - a technology that is inherently limited in terms of number of titles strongly benefiting.
But I've been wrong before and I am still using two HD4870 512MB cards in crossfire.
If you look at things that way, then perhaps it's better to start looking at Fermi as a competitor to the 6800 series? Since they will only be about half a generation apart and both completely new architectures.
If Fermi only beats the 5800 series considerably in tessellated DX11 games, then a "smart" investor in future technology would definitely wait for the 6800 series to make his decision.
That's kinda my point though, most people want a graphics card that is fast now, today, so they shouldn't be thinking too much about how gaming will be in late 2010 and early 2011. Just buy what's fastest for today's games, be it tessellated or not. If you start pondering too much about how well Fermi will run tessellated games and whether you should buy it for such games, then you might as well wait for the 6800 series as well.
I understand waiting for Fermi cause it might beat the 5800 series in every way for today's games, but if you're waiting for it cause you want a future proof product then you might as well wait longer.
if you have been around in the hw world for a few years you know that it makes no sense whatsoever to buy a videocard as a longterm investment, ESPECIALLY if its the first generation of a new dx standard... by the time that standard is actually used there will be MUCH faster AND cheaper cards out...
so yes, im sure it will pull ahead once tesselation is used widely... but when that will be... who knows... could be the end of 2010, could be 2012... and even if its the end of 2010, by then there should be cards with better price/perf from both camps afaik... so it really doesnt make much sense to buy a fermi betting on it performing better in the future when tesselation is used more widely...
A high end graphics card lasts 3 years at most, and it has to be matched by the operating system, which plays the most important role here.
The first Nvidia graphics card to support DX10 was the Geforce 8 series, which launched in November 2006; Vista was released 2 months later, in January 2007. The Radeon 5800 series was released in September 2009 and Windows 7 a month later, in October 2009. This is the key point for a successful marketing strategy: ride on the future market share of another company's product, in these examples Nvidia at the end of 2006 with Windows Vista, and now AMD's Evergreen 5800 series with Windows 7. I call it a successful combo buying habit, not to mention that AMD is doing a better job giving alternative options to its customers. Nvidia with its Geforce 8 series took 6 months to deliver mainstream products, from November 2006 (8800GTX) to April 2007 (GeForce 8600 GTS). AMD took 1 month to release the 5700 series, from September 23, 2009 (5800 series) to October 13, 2009 (5700 series). It clearly shows Nvidia's appetite for money at that time.
I was expecting Fermi to be launched before Windows 7 again. It looks like something went wrong somewhere. Nvidia does not have any backup plan; when something gets delayed, it gets delayed much more than at any other company that has backup plans for occasions like this. It looks like AMD's graphics division took the lead and its market share only tends to grow.
I'm certain that Nvidia will launch only high end graphics cards to cash in, and then after some months the mainstream graphics cards.
I still think AMD is overpriced selling its 5xxx series, the 4xxx is a much wiser buy but then again the Win 7 DX11 syndrome is here and real.
AMD is charging what they can for HD5k due to the lack of competition. Can you blame AMD for that? They need every Dollar they can get (more than many other companies).
i think ati could have sold quite a few additional 5800 cards if they had reduced the price by 50-100$, which would have resulted in higher net profits... but they were yield limited, so it made perfect sense not to lower the price...
now that fermi is around the corner cutting the price doesnt make much sense cause people will wait for fermi before buying a new card...
so as soon as fermi launches, 5800 prices will probably drop notably... 299$ is my guess...
ooor, if ati is smart, they will replace the 5870 with a 5890 and try to keep the same price... 350-399$
Well on the matter of backup plan, I don't think any of the companies playing with the silicon have much of a backup plan. Look at ATI with the r600 and its delays, look at Intel with the P4, look at AMD with Phenom 1.
Sometimes issues result in product delays, less than expected performance, or both, but sometimes things are very successful from the start.
Then you have Athlon 64, Phenom2, Core2, G80, & RV770 which would be arguably very successful hardware from launch.
+1
One really bad one comes to mind - Nvidia's FX series effectively allowed ATI to swallow up a big chunk of market share with their 9700 series. Up until then Nvidia were looking like they had it wrapped up. History doesn't always repeat itself - there's no way to know how this gen will pan out until we see the products.
I personally think this is a bad time for graphics card makers - there just aren't enough games out there that need the power. I used to get very excited by new gens of graphics cards, but these days, what I have in my rig is already more than enough.
there usually are backup plans... we just never notice it cause the decision to go for either one is made way before release...
for example, xenos was originally planned as a desktop part but was canned because of performance shortcomings, and instead they just doubled up on their previous desktop gpu...
and look at nvidia, gt200 is nothing but an (almost) double pumped G92... im sure they didnt plan that originally, but many times doubling up a current design gives you roughly the same performance as a new design, and its known to work so its less risky. and nvidia actually DID have a backup plan afaik, 40nm G92 and G200, its just that those didnt work out... it was a poor backup plan cause shrinking from 55 to 40 is very difficult from what i heard.
amd K9 never made it either... intel tejas was canned... intels celeron SOC in the late nineties was canned... and once again in the early 2000s they tried it again and cancelled it... intel using imagination technologies gpus in their chipset is OBVIOUSLY a backup plan because their own gpu solutions didnt work out as planned to cover some segments...
Isn't this thing hitting the market in March? Why don't we see any decent benchmarks and reviews?
Let's take my example for this matter, when I said Nvidia does not have backup plans. If you were Nvidia's CEO, what would you do? Just remember that in that position there are many secrets that only the higher ups know; it is like playing cards hand by hand, you don't know everything but you know many things. You know quite well that Windows 7 is coming and will bring DX11 (which I think is pretty basic since most of us knew that), and also that the competition will use that as an advantage to take the lead. Will you let it go or will you do something about it? This situation occurred months before the launch of Windows 7. You as a CEO had a choice, right? It is not like you did not have one.
As a CEO, if Fermi was not ready for that month, I would play their game and do the same: release something with DX11 support and/or something extra, even if it is not as powerful as the 5800 series. This new model would compete with the mainstream 5700 series, or at least give customers the DX11 feeling that they wanted, and personally I think this would not have weakened the brand as much as the current situation has.
What about a G92 with DX11 only to compete? That is my point, don't let the competition go on a rampage.
I gotcha, it really does seem like they fell asleep at the wheel in regards to getting any dx11 hardware out the door. Both ATI and Nvidia seem to release the latest tech on the high end parts first before releasing the lower parts, so with the gt300 delays you would have thought some midrange to lower end parts would have been scheduled to release by now.
Why would that be the case? It seems AMD is dead set on using 28nm for its next graphics tech, which means at the minimum it's going to be a 2011 part. That's a substantially longer wait. The sketchiness of 28nm just makes its actual timeline even less reliable.
Even if the gtx 480 is as fast as it's hyped up to be, AMD can still make money selling its cards for lower prices than they are at the moment. E.g. 5870 @ $299, 5850 @ $229-249, and the 5970 at $549 or $499. All of AMD's parts right now are small chips and thus can be sold at much lower prices and still make a profit.
AMD is in no rush, the $299 and under market is quite safe for them and it's actually quite a profitable area.
But in terms of upgrading for the sake of playing directx 11 games, waiting is actually not a bad idea simply because of the lack of games at the moment. Dirt 2 really isn't that good and alien vs predator seems sketchy.
The impression I've gotten is that ATI is going for a refresh of 40nm parts (ie, same architecture) in the second half of this year, while intending to release the new architecture on 28nm (though likely to release some low end parts first on 28nm to test out the process) in the first half of next year.
To be on topic however, Fermi won't be up against anything but the current architecture when released. An ATI refresh might yield some improvements, but it's anyone's guess where things will end up, though I imagine they'll be working on some tessellation improvements to shore up things between the 5800 series and Fermi.
Alex Sakhartchouk, Demo Engineer at NVIDIA, explains the technologies behind NVIDIA's new Rocket Sled demo running on the new Fermi architecture
http://www.youtube.com/watch?v=HjIzo...layer_embedded
Jensen doesn't believe in backup plans. (Paraphrasing a quote)
I wouldn't say yields are bad, TSMC didn't have enough capacity at the time. Simple economics, low supply and high demand = high prices.
AIBs are just getting their cards... Give it another week or two.
Not that simple. These designs are worked on for 3-4 years before they are released. This isn't a simple "hey, let's throw DX11 into the architecture and get it out next month since GF100 is having problems". There isn't much you can do as far as backup plans go in this industry; the R&D costs are just way too high as it is.
Sketchy? I think not, at least for GF. TSMC, on the other hand, god help them.
http://www.globalfoundries.com/pdf/G...ldBrochure.pdf
Please note proof#5 on page 2.