https://www.tweaktown.com/news/59276...ind/index.html
NVIDIA founder and CEO Jen-Hsun Huang took the stage as usual, saying: "At no time in the history of computing have such exciting developments been happening, and such incredible forces in computing been affecting our future. What technology increases in complexity by a factor of 350 in five years? We don't know any. What algorithm increases in complexity by a factor of 10? We don't know any. We are moving faster than Moore's Law."

Huang then unveiled NVIDIA's new TensorRT 3, an inferencing platform that the company claims lets a previously-trained DNN run in a production environment at 45,000 images per second. This is all powered by an HGX server packing 8 x Tesla V100 accelerators. The big difference: NVIDIA's HGX server consumes 3kW to do this with its 8 x Tesla V100 accelerators, while the traditional CPU-based platform it was compared against, 160 dual-CPU servers, draws 65kW. That makes NVIDIA's GPU-based solution roughly 21.7x (or 2066%) more power efficient for the same workload. We can see why NVIDIA is pushing so hard into the AI and DNN markets.
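The 2066% figure can be sanity-checked with simple arithmetic. A minimal sketch, assuming the power draws quoted in the article (3kW for the HGX box, 65kW for the 160-server CPU platform) and that the percentage was derived as the ratio of the two minus 100%:

```python
# Back-of-the-envelope check of the power-efficiency claim.
# Figures from the article; the formula is an assumption about
# how the 2066% number was derived.
gpu_kw = 3.0    # HGX server with 8 x Tesla V100
cpu_kw = 65.0   # 160 dual-CPU servers, same claimed throughput

ratio = cpu_kw / gpu_kw                    # ~21.7x less power for the same work
percent_more_efficient = (ratio - 1) * 100 # ~2066.7%, truncates to 2066

print(f"{ratio:.1f}x")                         # 21.7x
print(f"{int(percent_more_efficient)}% more power efficient")  # 2066%
```

Both machines are credited with the same 45,000 images/s throughput, so comparing raw power draw is equivalent to comparing energy per inference.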