Crunching for Comrades and the Common good of the People.
yeah, still no idea how AMD/ATI's graphics and chipsets compare to Nvidia in total revenue.
nor why or where all of AMD's money goes... i.e. which expenses make them unprofitable, whether it's CPU, graphics, or both.
they must have a very large overdraft facility or other creative accounting methods.
because $6 billion in revenue is a lot of money, and I would surmise there are some fat cats getting very rich in the AMD camp, despite the creative financial statements.
i7 3610QM 1.2-3.2GHz
took me 1 min to find:

Year-end 2007, Graphics segment:
Net revenue: $903 million
Operating income (loss): $(100) million
http://www.amd.com/us-en/assets/cont...Financials.pdf

Q1 2008, Graphics segment:
Net revenue: $203 million
Operating income (loss): $(11) million
http://www.amd.com/us-en/assets/cont...Financials.pdf
3 weeks to go and no solid info?
Great job hiding the cards.
4670k 4.6ghz 1.22v watercooled CPU/GPU - Asus Z87-A - 290 1155mhz/1250mhz - Kingston Hyper Blu 8gb -crucial 128gb ssd - EyeFunity 5040x1050 120hz - CM atcs840 - Corsair 750w -sennheiser hd600 headphones - Asus essence stx - G400 and steelseries 6v2 -windows 8 Pro 64bit Best OS used - - 9500p 3dmark11(one of the 26% that isnt confused on xtreme forums)
basically anything that uses the Valve engine (which I believe BF2 does, since it's a Steam game) runs extremely nicely on ATI cards, which will also help them out.
But the way I see it, GT200 is designed, if anything, to scare Intel, which I bet it will until Larrabee comes out. Even then, Larrabee won't be able to compete with ATI's and Nvidia's high end, no matter how big Intel's R&D budget is. The first generation is generally a test of how well certain aspects work: in the case of the G80 and Core 2, both performed above and beyond expectations, while the R600 and K10, though very modular, did not live up to theirs. I wouldn't be surprised to see a second- or third-generation Larrabee that can outperform the GTX 280, though.
So in short, the GTX 280 will spank the RV770, but the RV770 was never meant to compete with it anyway. What will be key is how well the MCM design works out for the R700 and how well the card scales (hopefully they won't need that bridge chip anymore, as it definitely added cost and heat to the 3870 X2).
No, the BF2 engine was designed and programmed by the "engineers" at DICE, IIRC. That engine features ugly-looking bump maps and large environments. The Valve Source engine, by contrast, features nice-looking bump/normal maps but tiny, tiny little low-poly environments.
I wonder how Larrabee will fare against Nvidia's/ATI's offerings at the time of its release!
I think Larrabee will be worse at raster rendering than its competitors' offerings, but when ray tracing finally becomes available in games it will kick ass.
Crunching for Comrades and the Common good of the People.
I like how I give XS a crash course in ACC and now everyone is a pro...
thx for that.
for 2007, out of $2.262 billion in gross margin, AMD spent $1.847 billion on R&D.
& graphics accounts for less than 1/6th ($0.903 billion) of AMD's total revenue for 2007... compared to Nvidia's $3+ billion...
so Nvidia's total revenue is roughly 3x AMD's graphics revenue.
which could translate, on a per-dollar basis, into roughly 3 Nvidia cards sold for every 1 ATI card, more or less.
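For what it's worth, here's a minimal back-of-envelope sketch of that ratio in Python. Only the $903 million AMD graphics figure comes from the segment data above; the ~$3 billion Nvidia figure is the round number quoted in this thread, and the average selling prices are made-up assumptions to show how much the dollars-to-cards step depends on them.

# Rough revenue comparison using the FY2007 figures quoted in this thread.
nvidia_revenue = 3.0e9      # ~$3 billion total Nvidia revenue (approximate)
amd_gfx_revenue = 0.903e9   # $903 million, AMD Graphics segment, FY2007

print(f"Revenue ratio ~= {nvidia_revenue / amd_gfx_revenue:.1f} : 1")

# The jump from dollars to cards only holds if both vendors sell at similar
# average prices; the ASPs below are purely hypothetical.
assumed_nvidia_asp = 150.0
assumed_amd_asp = 120.0     # a lower ATI ASP would narrow the unit gap
unit_ratio = (nvidia_revenue / assumed_nvidia_asp) / (amd_gfx_revenue / assumed_amd_asp)
print(f"Unit ratio under those ASPs ~= {unit_ratio:.1f} : 1")

So the "3 cards to 1" reading only follows if average selling prices are comparable, and it ignores that Nvidia's total includes chipsets as well as graphics cards.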
i7 3610QM 1.2-3.2GHz
AMD official press release on employing GDDR5
AMD (NYSE: AMD) today announced the first commercial implementation of Graphics Double Data Rate, version 5 (GDDR5) memory in its forthcoming next generation of ATI Radeon™ graphics card products. The high-speed, high-bandwidth GDDR5 technology is expected to become the new memory standard in the industry, and that same performance and bandwidth is a key enabler of The Ultimate Visual Experience™, unlocking new GPU capabilities. AMD is working with a number of leading memory providers, including Samsung, Hynix and Qimonda, to bring GDDR5 to market.
Today’s GPU performance is limited by the rate at which data can be moved on and off the graphics chip, which in turn is limited by the memory interface width and die size. The higher data rates supported by GDDR5 – up to 5x that of GDDR3 and 4x that of GDDR4 – enable more bandwidth over a narrower memory interface, which can translate into superior performance delivered from smaller, more cost-effective chips. AMD’s senior engineers worked closely with industry standards body JEDEC in developing the new memory technology and defining the GDDR5 spec.
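To make the "more bandwidth over a narrower interface" point concrete, here is a minimal Python sketch; the bus widths and data rates below are illustrative assumptions, not figures from the press release.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gtps

# A hypothetical 512-bit bus of ~2 GT/s GDDR3 versus a 256-bit bus of ~4 GT/s GDDR5:
wide_gddr3 = peak_bandwidth_gbs(512, 2.0)     # 128 GB/s
narrow_gddr5 = peak_bandwidth_gbs(256, 4.0)   # 128 GB/s on half the pins

print(wide_gddr3, narrow_gddr5)

Same ballpark bandwidth from half the memory interface, which is the smaller-and-cheaper-chip argument the release is making.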
“The days of monolithic mega-chips are gone. Being first to market with GDDR5 in our next-generation architecture, AMD is able to deliver incredible performance using more cost-effective GPUs,” said Rick Bergman, Senior Vice President and General Manager, Graphics Product Group, AMD. “AMD believes that GDDR5 is the optimal way to drive performance gains while being mindful of power consumption. We’re excited about the potential GDDR5 brings to the table for innovative game development and even more exciting game play.”
The introduction of GDDR5-based GPU offerings marks the continued tradition of technology leadership in graphics for AMD. Most recently, AMD has been the first to bring a unified shader architecture to market, the first to support Microsoft DirectX® 10.1 gaming, the first to transition to lower process nodes like 55nm, the first with integrated HDMI with audio, and the first with double-precision floating point calculation support.
AMD expects that PC graphics will benefit from the increase in memory bandwidth for a variety of intensive applications. PC gamers will have the potential to play at high resolutions and image quality settings, with superb overall gaming performance. PC applications will have the potential to benefit from fast load times, with superior responsiveness and multi-tasking.
“Qimonda has worked closely with AMD to ensure that GDDR5 is available in volume to best support AMD’s next-generation graphics products,” said Thomas Seifert, Chief Operating Officer of Qimonda AG. “Qimonda’s ability to quickly ramp production is a further milestone in our successful GDDR5 roadmap and underlines our predominant position as innovator and leader in the graphics DRAM market.”
GDDR5 for Stream Processing
In addition to the potential for improved gaming and PC application performance, GDDR5 also holds a number of benefits for stream processing, where GPUs are applied to address complex, massively parallel calculations. Such calculations are prevalent in high-performance computing, financial and academic segments among others. AMD expects that the increased bandwidth of GDDR5 will greatly benefit certain classes of stream computations.
New error detection mechanisms in GDDR5 can also help increase the accuracy of calculations by identifying errors and re-issuing commands to get valid data. This capability is a level of reliability not available with other GDDR-based memory solutions today.
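For illustration only, here is a conceptual Python sketch of the detect-and-retry idea behind that feature, assuming a checksum travels alongside each transfer. GDDR5 uses its own link-level CRC scheme; this is not AMD's or JEDEC's actual mechanism, just an analogue of the retry logic described above.

import zlib

def read_with_retry(read_burst, max_retries=3):
    """read_burst() simulates one memory transfer and returns (data, crc_received),
    where the CRC travelled alongside the data. Re-issue the read until the
    locally recomputed CRC matches the received one."""
    for _ in range(max_retries):
        data, crc_received = read_burst()
        if zlib.crc32(data) == crc_received:
            return data  # transfer verified, hand the data back
    raise IOError("transfer still failing CRC check after retries")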
http://forums.vr-zone.com/showthread.php?t=278539
'Final' Radeon HD 4800-series specs, launch details leak out
German website Hardware-Infos has obtained (translation here) what looks like an official document with "final" specifications, pricing information, and launch details for AMD's next-generation Radeon HD 4800-series graphics cards. This information echoes the June 18 launch date we heard last week, but it says both the 4850 and the 4870 will come out on the same day.
The Radeon HD 4850 will apparently feature 480 stream processors, a 625MHz core speed, an 825MHz shader speed, 512MB of 1143MHz GDDR3 memory, and a 114W thermal envelope. The faster Radeon HD 4870 will also have 480 SPs, but with an 850MHz core speed, 1050MHz shader speed, 1GB of 1935MHz (3870MHz "effective") GDDR5 RAM, and a 157W TDP. Both cards will also feature 256-bit memory buses and 16 raster operators, just like existing Radeon HD 3800-series models, but with twice as many texture mapping units (32 instead of 16).
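Plugging those leaked clocks into the usual peak-bandwidth formula (bus width in bytes times effective transfer rate) gives a rough sense of what GDDR5 buys the 4870. A quick Python sketch, treating the leaked figures as given:

def bandwidth_gbs(bus_width_bits, effective_mhz):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

hd4850 = bandwidth_gbs(256, 1143 * 2)  # GDDR3 is double data rate -> ~73 GB/s
hd4870 = bandwidth_gbs(256, 3870)      # "3870MHz effective" GDDR5 -> ~124 GB/s

print(f"HD 4850 ~{hd4850:.0f} GB/s, HD 4870 ~{hd4870:.0f} GB/s")

Roughly 70% more bandwidth on the same 256-bit bus, which is exactly the GDDR5 advantage AMD's press release above is talking about.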
Hardware-Infos says the Radeon HD 4850 will launch at $249 and the Radeon HD 4870 will be $349. If recently leaked performance numbers are accurate, the 4850 may be in the same playing field as Nvidia's ~$300 GeForce 9800 GTX.
http://www.techreport.com/discussions.x/14763
regards
[Review] Core i7 920 & UD5 » Here!! « .....[Review] XFX GTX260 216SP Black Edition » Here!! «
[Review] ASUS HD4870X2 TOP » Here!! « .....[Review] EVGA 750i SLi FTW » Here!! «
[Review] BFG 9800GTX 512MB » Here!! « .....[Review] Geforce 9800GX2 1GB » Here!! «
[Review] EVGA GTX280 1GB GDDR3 » Here!! « .....[Review] Powercolor HD4870 512MB GDDR5 » Here!! «
Not quite true... the 65nm Pentium D didn't surpass the 90nm A64 X2, and Prescott didn't surpass the 130nm A64s either. Nor did the R600 surpass the G80. We also saw what happened with the RV670 vs. the G92.
A smaller process can help you do more in a single chip, or lower power consumption, but it doesn't make for a better chip in all cases. Besides, by the time we see Larrabee, both ATI and Nvidia will be on smaller processes than they are now, and Intel has stated Larrabee will be 10x the performance of their best IGP, which would only put it on par with the G80...