NVIDIA Q&A Program



Amorphous
03-01-2010, 11:36 AM
We've been running a pretty successful program back at NVIDIA's forums, and we're looking to expand it to reach more people, and cover more questions.

If you have questions regarding NVIDIA, its products, specs, technology, The Way It's Meant to Be Played, etc., post 'em here. We kick over the "top" questions and typically receive a response from NVIDIA product managers about once a month.

A few guidelines on questions:
No flaming. Like, seriously, I don't work for NVIDIA. I don't get paid to do this. If you've got a good question, don't bury it in fire.
This thread is for asking questions. I'll always post a new thread with the responses so you can debate/discuss them.
NVIDIA does not answer questions about unreleased hardware. It's pretty rare that they'll tell me. Once they release the information, it's fair game of course. Just don't expect a response to a question about the GeForce GTX 980.
Try to ask a direct, complete question. The condensed version, please! I know some questions require a lot of background information; just separate your question from it.
Make sure there's a question in there. Questions are typically followed by a ? I'm not telepathic, and I can't turn a collection of statements into a question.
This thread is NOT the place to attempt to diagnose your specific hardware problem. Nor will NVIDIA give you advice on what specific card you need for your very specific situation.

We've already done a few rounds of this, so it's a good idea to see if your question has already been answered.


Amorphous
__________________________________________________
Previous responses:


2/19/2010

Q: Why leave the chipset business?

Tom Petersen, Director of Technical Marketing for SLI and PhysX: We will continue to innovate in integrated solutions for Intel's FSB architecture. We firmly believe that this market has a long healthy life ahead. But because of Intel's improper claims to customers and the market that we aren't licensed to the new DMI bus, it is effectively impossible for us to market chipsets for future CPUs. So, until we resolve this matter in court, we'll postpone further chipset investments for Intel DMI CPUs.

Despite Intel's actions, we have innovative products that we are excited to introduce to the market in the months ahead. We know these products will bring with them some amazing breakthroughs that will surprise the industry.


Q: Now that ATI has made it a standard feature, what is NVIDIA doing to support 3+ monitor gaming? How would it work with SLI? And when will we see driver support for Surround gaming and 3D Vision Surround?

Andrew Fear, Product Manager for 3D Vision: GTX 200 or GTX 400 GPUs in SLI will provide triple monitor gaming support. Not only that, we'll also be supporting 3D Vision across the three panels, enabling a truly spectacular 3D gaming experience. We'll have more information on driver availability in the near future.


Q: Is NVIDIA working with Pande Group on OpenCL for a rumored new F@H GPU client?

Andrew Humber, Senior PR Manager for Tesla: The OpenCL client development effort is being driven by the Pande Group at Stanford so we should allow them to comment on its status. What we can say is that we are working closely with them on this and a number of other projects that will continue to deliver improvements in Folding@Home performance for NVIDIA GPU contributors. Our view is to support the Folding@Home effort, irrespective of their choice of API.


Q: How did you get so behind schedule on the Fermi? I just saw that it was delayed to 2010. How will you recover from lost sales to AMD/ATi?

Jason Paul, GeForce product manager: On the GF100 schedule—I think Ujesh Desai (our Vice President of Marketing) said it best: "designing GPUs is f'ing hard!" :) With GF100, we chose to tackle some of the toughest problems of graphics and compute. If we had merely doubled up on GT200, we might have shipped earlier, but essential elements for DX11 gaming, like support for scalable tessellation in hardware, would have remained unsolved.

While we all wish GF100 would have been completed earlier, our investment in a new graphics and compute architecture is showing fantastic results, and we're glad that we took the time to do it right so gamers can get a truly great experience.

Regarding "lost sales" -- despite some rumors to the contrary, we have been shipping our GTX 200 GPUs in mass and they continue to sell well. In fact, our overall GeForce desktop market share grew during the last quarter: http://www.pcper.com/comments.php?nid=8312



12/03/2009

Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world?

Tom Petersen, Director of Technical Marketing: PhysX does not compete with OpenCL or DX11's DirectCompute.

PhysX is an API and runtime that allows games and game engines to model the physics in a game. Think of PhysX as a layer above OpenCL or DirectCompute, which in contrast are very generic, low-level interfaces that enable GPU-accelerated computation. Game developers don't create content in OpenCL or DirectCompute. Instead they author in toolsets (some of which are provided by NVIDIA) that allow them to be creative quickly. Once they have good content, they "compile" it for a specific platform (PC, Wii, Xbox, PS3, etc.) using another tool flow.
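
To make the "layer above" idea concrete, here's a rough sketch of what game code written against the PhysX runtime looks like. This is from memory of the 2.8-era SDK, so treat the class and function names as approximate rather than authoritative; the point is that the game only ever talks to scenes, actors, and shapes, never to CUDA or DirectCompute directly.

#include <NxPhysics.h>   // umbrella header of the PhysX 2.8-era SDK (name approximate)

int main()
{
    // Create the SDK and a scene with gravity. The game never touches the
    // low-level compute API; the runtime decides where the work actually runs.
    NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
    if (!sdk) return 1;

    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    NxScene* scene = sdk->createScene(sceneDesc);

    // A dynamic sphere dropped from 10 m above the origin.
    NxSphereShapeDesc sphereDesc;
    sphereDesc.radius = 0.5f;
    NxBodyDesc bodyDesc;
    NxActorDesc actorDesc;
    actorDesc.shapes.pushBack(&sphereDesc);
    actorDesc.body = &bodyDesc;
    actorDesc.density = 10.0f;
    actorDesc.globalPose.t = NxVec3(0.0f, 10.0f, 0.0f);
    NxActor* ball = scene->createActor(actorDesc);

    // Per-frame update: advance the simulation, then read back the pose
    // to feed into rendering and gameplay code.
    for (int frame = 0; frame < 600; ++frame) {
        scene->simulate(1.0f / 60.0f);
        scene->flushStream();
        scene->fetchResults(NX_RIGID_BODY_FINISHED, true);
        NxVec3 pos = ball->getGlobalPosition();
        (void)pos;
    }

    sdk->releaseScene(*scene);
    NxReleasePhysicsSDK(sdk);
    return 0;
}

The heavy lifting (broadphase, the rigid-body solver, fluid and cloth kernels) happens inside the runtime, which is where any GPU acceleration lives.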

During this process game studios have three basic concerns: Does PhysX make it easier to develop games for all platforms – including consoles? Does PhysX make it easier to have kick-ass effects in my game? Will NVIDIA support my efforts to integrate this technology? And the answer to the three questions above is: yes, yes, and yes. We are spending our time and money pursuing those goals to support developers, and right now the developer community is not telling us that OpenCL or DirectCompute support is required.

In the future this may or may not change, and the dynamics of this situation are hard to predict. We can say this though: AMD and Intel are not investing today at the same pace as NVIDIA in GPU accelerated physics. AMD and Intel will need to do the bulk of the work required to support GPU accelerated PhysX on their products. NVIDIA is not going to do QA or design for AMD or Intel.

At the end of the day, the success of PhysX as a technology will depend on how easy it is for game designers to use and how incredible the game effects are that they create. Batman: Arkham Asylum is a good example of the type of effects we can achieve with PhysX running on NVIDIA GPUs, and we are working to make the next round of games even more compelling. At this time, NVIDIA has no plan to move from CUDA to either OpenCL or DirectCompute as the implementation engine for GPU acceleration. Instead we are working to support developers and implement killer effects.

So does NVIDIA profit from all this? We sure hope so. If we make our GPUs more desirable because they do incredible things with PhysX, then we have done a great job for our customers and made PC gaming more compelling.


Q: Will PhysX become open-source?

Tom Petersen: NVIDIA is investing a lot of time and effort in PhysX and we do not plan to make it open source today. Of course the binaries for the SDK are distributed for free, and source code is available for licensing if game designers need it.



11/02/2009

Q: With AMD's acquisition of ATI and Intel becoming more involved in graphics, what will NVIDIA do to remain competitive in the years to come?

Jen-Hsun Huang, CEO and founder of NVIDIA: The central question is whether computer graphics is maturing or entering a period of rapid innovation. If you believe computer graphics is maturing, then slowing investment and "integration" is the right strategy. But if you believe graphics can still experience revolutionary advancement, then innovation and specialization is the best strategy.

We believe we are in the midst of a giant leap in computer graphics, and that the GPU will revolutionize computing by making parallel computing mainstream. This is the time to innovate, not integrate.

The last discontinuity in our field occurred eight years ago with the introduction of programmable shading and led to the transformation of the GPU from a fixed-pipeline ASIC to a programmable processor. This required GPU design methodology to include the best of general-purpose processors and special-purpose accelerators. Graphics drivers added the complexity of shader compilers for Cg, HLSL, and GLSL shading languages.

We are now in the midst of a major discontinuity that started three years ago with the introduction of CUDA. We call this the era of GPU computing. We will advance graphics beyond "programmable shading" to add even more artistic flexibility and ever more power to simulate photo-realistic worlds. Combining highly specialized graphics pipelines, programmable shading, and GPU computing, "computational graphics" will make possible stunning new looks with ray tracing, global illumination, and other computational techniques that look incredible. "Computational graphics" requires the GPU to have two personalities – one that is highly specialized for graphics, and the other a completely general purpose parallel processor with massive computational power.

While the parallel processing architecture can simulate light rays and photons, it is also great at physics simulation. Our vision is to enable games that can simulate the interaction between game characters and the physical world, and then render the images with film-like realism. This is surely in the future since films like Harry Potter and Transformers already use GPUs to simulate many of the special effects. Games will once again be surprising and magical, in a way that is simply not possible with pre-canned art.

To enable game developers to create the next generation of amazing games, we've created compilers for CUDA, OpenCL, and DirectCompute so that developers can choose any GPU computing approach. We've created a tool platform called Nexus, which integrates into Visual Studio and is the world's first unified programming environment for a heterogeneous computing architecture with the CPU and GPU in a "co-processing" configuration. And we've encapsulated our algorithm expertise into engines, such as the OptiX ray-tracing engine and the PhysX physics engine, so that developers can easily integrate these capabilities into their applications. And finally, we have a team of 300 world-class graphics and parallel computing experts in our Content Technology group whose passion is to inspire and collaborate with developers to make their games and applications better.
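
For a concrete taste of what "GPU computing" looks like to a developer, here is a minimal CUDA C example of a data-parallel kernel, a SAXPY (y = a*x + y), with its host-side setup. This is purely an illustration and not tied to any particular NVIDIA product; the same computation could just as well be written against OpenCL or DirectCompute.

#include <cstdio>
#include <cuda_runtime.h>

// y[i] = a * x[i] + y[i], one thread per element.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side input data.
    float* hx = new float[n];
    float* hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device-side copies.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // expect 5.0

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}

The kernel is written once; the driver's compiler maps it onto however many processing cores the GPU happens to have.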

Some have argued that diversifying from visual computing is a growth strategy. I happen to believe that focusing on the right thing is the best growth strategy.

NVIDIA's growth strategy is simple and singular: be the absolute best in the world in visual computing – to expand the reach of GPUs to transform our computing experience. We believe that the GPU will be incorporated into all kinds of computing platforms beyond PCs. By focusing our significant R&D budget to advance visual computing, we are creating breakthrough solutions to address some of the most important challenges in computing today. We build GeForce for gamers and enthusiasts; Quadro for digital designers and artists; Tesla for researchers and engineers needing supercomputing performance; and Tegra for mobile users who want a great computing experience anywhere. A simple view of our business is that we build GeForce for PCs, Quadro for workstations, Tesla for servers and cloud computing, and Tegra for mobile devices. Each of these targets different users, and thus each requires a very different solution, but all are visual computing focused.

For all of the gamers, there should be no doubt: You can count on the thousands of visual computing engineers at NVIDIA to create the absolute best graphics technology for you. Because of their passion, focus, and craftsmanship, the NVIDIA GPU will be state-of-the-art and exquisitely engineered. And you should be delighted to know that the GPU, a technology that was created for you, is also able to help discover new sources of clean energy and help detect cancer early, or to just make your computer interaction lively. It surely gives me great joy to know that what started out as "the essential gear of gamers for universal domination" is now off to really save the world.

Keep in touch.

Jensen


Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world? Will PhysX become open-source?

Tom Petersen, Director of Technical Marketing: NVIDIA supports and encourages any technology that enables our customers to more fully experience the benefits of our GPUs. This applies to things like CUDA, DirectCompute and OpenCL—APIs where NVIDIA has been an early proponent of the technology and contributed to the specification development. If someday a GPU physics infrastructure evolves that takes advantage of those or even a newer API, we will support it.

For now, the only working solution for GPU accelerated physics is PhysX. NVIDIA works hard to make sure this technology delivers compelling benefits to our users. Our investments right now are focused on making those effects more compelling and easier to use in games. But the API we do that on is not the most important part of the story for developers, who are mostly concerned with features, cost, cross-platform capabilities, toolsets, debuggers, and generally anything that helps them complete their development cycles.


Q: How is NVIDIA approaching the tessellation requirements of DX11, given that none of the previous or current generation cards has hardware specific to this technology?

Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :-P). We'll share more details when we introduce Fermi's graphics architecture shortly!


10/23/2009

1. Is NVIDIA moving away from gaming and focusing more on GPGPU? We have heard a lot about Fermi's compute capability, but nothing of how good it is for gamers.

Jason Paul, GeForce Product Manager: Absolutely not. We are all gamers here! But, like G80 and GT200 before it, Fermi has two personalities: graphics and compute. We chose to introduce Fermi's compute capability at our GTC conference, which was very compute-focused and attended by developers, researchers, and companies using our GPUs and CUDA for compute-intensive applications. Such attendees require fairly long lead times for evaluating new technologies, so we felt it was the right time to unveil Fermi's compute architecture. Fermi has a very innovative graphics architecture that we have yet to unveil.

Also, it's important to note that our reason for focusing on compute isn't all about HPC. We believe next generation games will exploit compute as heavily as graphics. For example:


Physical simulation – whether using PhysX, Bullet, or DirectCompute, GPU computing can add incredible dynamic realism to games through physical simulation of the environment.
Advanced graphical effects – compute shaders can be used to speed up advanced post-processing effects such as blurs, soft shadows, and depth of field, helping games look more realistic (see the sketch after this list).
Artificial intelligence – compute shaders can be used for artificial intelligence algorithms in games.
Ray tracing – this is a little more forward looking, but we believe ray tracing will eventually be used in games for incredibly photo-realistic graphics. NVIDIA's ray tracing engine uses CUDA.
Compute is important for all of the above. That's why Fermi is built the way it is, with a strong emphasis on compute features and performance.
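
To ground the post-processing bullet above, here is a minimal sketch of a horizontal box blur written as a CUDA kernel. It is illustrative only: the image size, blur radius, and buffer names are made up, and a shipping game would more likely use a DirectCompute/HLSL compute shader and a separable Gaussian, but the "one thread per output pixel" structure is the same.

#include <cuda_runtime.h>

// Naive horizontal box blur over a single-channel float image,
// one thread per output pixel, clamped at the image edges.
__global__ void boxBlurH(const float* in, float* out,
                         int width, int height, int radius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float sum = 0.0f;
    for (int dx = -radius; dx <= radius; ++dx) {
        int sx = min(max(x + dx, 0), width - 1);  // clamp to the edge
        sum += in[y * width + sx];
    }
    out[y * width + x] = sum / (2 * radius + 1);
}

int main()
{
    const int w = 1280, h = 720;          // hypothetical render-target size
    float *d_in, *d_out;
    cudaMalloc(&d_in,  w * h * sizeof(float));
    cudaMalloc(&d_out, w * h * sizeof(float));
    cudaMemset(d_in, 0, w * h * sizeof(float));   // stand-in for real image data

    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    boxBlurH<<<grid, block>>>(d_in, d_out, w, h, 4);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

In practice the pass would be run twice, once horizontally and once vertically, which is what makes separable blurs cheap compared with a full 2D filter.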

In addition, we wouldn't be investing so heavily in gaming technologies if we were really moving away from gaming. Here are a few of the substantial investments NVIDIA is currently making in PC gaming:


PhysX and 3D Vision technologies
The Way It's Meant to Be Played program, including technical support, game compatibility testing, developer tools, antialiasing profiles, ambient occlusion profiles, etc.
LAN parties and gaming events (including PAX, PDX LAN, Fragapalooza, Million Man LAN, BlizzCon, and QuakeCon to name a few recent ones). Here are some links to videos from those events:
http://www.slizone.com/object/slizone_eventsgallery_aug09.html
http://www.nzone.com/object/nzone_quakecon09_trenches.html
http://www.nzone.com/object/nzone_blizzcon09_trenches.html
http://www.nzone.com/page/nzone_section_trenches.html

We put our money where our mouth is here.

Finally, Fermi has plenty of "traditional" graphics goodness that we haven't talked about yet. Fermi's graphics architecture is going to blow you guys away! Stay tuned.


2. Why has NVIDIA continued to refresh the G92? Why didn't NVIDIA create an entry-level GT200 part? The constant G92 renames and reuse of this aging part have caused a lot of discontent among the 3D enthusiast community.

Jason Paul, GeForce Product Manager: We hear you. We realize we are behind with GT200 derivative parts, and we are doing our best to get them out the door as soon as possible. We invested our engineering resources in transitioning our G9x class products from 65nm to 55nm manufacturing technology, as well as adding several new video and display features to GT 220/210, which pushed these GT200-derivative products out later than usual. Also, 40nm capacity has been limited, which has made the transition more difficult.

Since its introduction, G92 has remained a strong price/performance product in our line-up. So why did we rebrand it? While hardware enthusiasts often look at GPUs in terms of the silicon core (i.e. G92) and architecture (i.e. GT2xx), many of our less techie customers instead think about GPUs simply in terms of performance, price, and feature set, summarized via the product name. The product name is an easy way to communicate how products with the same base feature set (i.e. DirectX 10 support) compare to each other in terms of price and performance. Let's take an example – which is the higher-performance product, an 8800 GT or a 9600 GT? The average Joe looking at an OEM web configurator or Best Buy retail shelf probably won't know the answer. But if they saw a 9800 GT and a 9600 GT, they would know that the 9800 GT would provide better performance. By keeping G92 branding current with the rest of our DirectX 10 product line-up, we were able to more effectively communicate to customers where the product fit in terms of price and performance. At the same time, we tried to make it clear to the technical press that these new brands were based on the G92 core so enthusiasts would know this information up front.


3. Is it true that NVIDIA has offered to open up PhysX to ATI without stipulation so long as ATI offers its own support and codes its own driver, or is ATI correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATI GPUs? What is NVIDIA's official stance on allowing ATI to create a driver at no cost for PhysX to run on their GPUs via OpenCL?

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD if they approached us. We can't really give PhysX away for "free" for the same reason a Havok or x86 license isn't free—the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company that approaches us with a serious proposal.


4. Is NVIDIA fully committed to supporting 3D Vision for the foreseeable future with consistent driver updates, or will we see a decrease in support, as appears to many 3D Vision users to be the current trend? For example, a lot of games have major issues with shadows while running 3D Vision. Can profiles fix these issues, or are we going to have to rely on developers to implement 3D Vision compatible shadows? What role do developers play in having a good 3D Vision experience at launch?

Andrew Fear, 3D Vision Product Manager: NVIDIA is fully committed to 3D Vision. In the past four driver releases, we have added more than 50 game profiles to our driver and we have seeded over 150 3D Vision test setups to developers worldwide. Our devrel team works hard to evangelize the technology to game developers and you will see more developers ensuring their games work great with 3D Vision. Like any new technology, it takes time and not every developer is able to intercept their development/release cycles and make changes for 3D Vision. In the specific example of shadows, sometimes these effects are rendered with techniques that need to be modified to be compatible with stereoscopic 3D, which means we have to recommend users disable them. Some developers are making the necessary updates, and some are waiting to fix it in their next games.

In the past few months we have seen our developer relations team work with developers to make Batman: Arkham Asylum and Resident Evil 5 look incredible in 3D. And we are excited now to see new titles that are coming – such as Borderlands, Bioshock 2, and Avatar – that should all look incredible in 3D.

Game profiles can help configure many games, but game developers spending time to optimize for 3D Vision will make the experience better. To help facilitate that, we have provided new SDKs for our core 3D Vision driver architecture that let developers have almost complete control over how their game is rendered in 3D. We believe these changes, combined with tremendous interest from developers, will result in a large growth of 3D Vision-Ready titles in the coming months and years.

In addition to making gaming better, we are also working on expanding our ecosystem to support better picture, movie, and Web experiences in 3D. A great example is our support for the Fujifilm FinePix REAL 3D W1 camera. We were the first 3D technology provider to recognize the new 3D picture file format taken by the camera and provide software for our users. In upcoming drivers, you will also see even more enhancements for a 3D Web experience.


5. Could Favre really lead the Vikings to a Super Bowl?

Ujesh Desai, Vice President of GeForce GPU Business: We are glad that the community looks to us to tackle the tough questions, so we put our GPU computing horsepower to work on this one! After simulating the entire 2009-2010 NFL football season using a Tesla supercomputing cluster running a CUDA simulation program, we determined there is a 23.468% chance of Favre leading the Vikings to a Super Bowl this season.* But Tesla supercomputers aside, anyone with half a brain knows the Eagles are gonna finally win it all this year! :)

*Disclaimer: NVIDIA is not liable for any gambling debts incurred based on this data.

xVeinx
03-01-2010, 11:08 PM
Some good stuff.