7nm migration improves ML inference engine power efficiency 10x

April 11, 2019 | By Nick Flaherty
Qualcomm brings AI inference processing to the cloud
Qualcomm Technologies is moving to a 7nm process to increase the power efficiency of inference engines for machine learning in the cloud.

The Cloud AI 100 will provide over ten times the performance per watt of the most advanced AI inference solutions deployed today, says Qualcomm.
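Performance per watt is simply inference throughput divided by power draw. The sketch below illustrates how the claimed 10x figure would be computed; the TOPS and wattage numbers are invented for demonstration only, since Qualcomm did not disclose Cloud AI 100 specifications at announcement.

```python
def perf_per_watt(tops: float, watts: float) -> float:
    """Inference throughput (TOPS) delivered per watt of power draw."""
    return tops / watts

# Hypothetical baseline: a GPU-class data-centre accelerator (figures assumed).
baseline = perf_per_watt(tops=100.0, watts=250.0)

# Hypothetical 7nm inference ASIC delivering the same throughput at a tenth
# of the power, matching the claimed 10x efficiency gain (figures assumed).
asic = perf_per_watt(tops=100.0, watts=25.0)

print(f"Efficiency improvement: {asic / baseline:.1f}x")
```

At equal throughput, a 10x efficiency gain is equivalent to a 10x reduction in power, which is the kind of saving that matters at data-centre scale.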

“Our all new Qualcomm Cloud AI 100 accelerator will significantly raise the bar for the AI inference processing relative to any combination of CPUs, GPUs, and/or FPGAs used in today’s data centers,” said Keith Kressin, senior vice president, product management, Qualcomm Technologies. “Furthermore, Qualcomm Technologies is now well positioned to support complete cloud-to-edge AI solutions all connected with high-speed and low-latency 5G connectivity.”

Qualcomm Technologies is supporting developers with a full stack of tools and frameworks for its cloud-to-edge AI systems, including PyTorch, Glow, TensorFlow, Keras, and ONNX. The company says distributed AI models will help enhance a range of uses, such as personal assistants for natural language processing and translation, advanced image search, and personalized content and recommendations. The Cloud AI 100 itself is an all-new, highly efficient chip designed specifically for processing AI inference workloads.

Microsoft Azure is a key partner for this. “Microsoft’s vision of cloud-to-edge AI emphasizes the benefits of distributed intelligence,” said Venky Veeraraghavan, partner group program manager, Microsoft Azure. “Collaboration continues between Qualcomm Technologies and Microsoft in many areas.”

The Qualcomm Cloud AI 100 is expected to begin sampling to customers in 2H 2019.


