Platform brings low-latency AI to the edge

May 28, 2019 //By Julien Happich
Nvidia’s EGX accelerated computing platform was created to enable companies to perform low-latency AI at the edge and act in real time on continuous streaming data from 5G base stations, warehouses, retail stores, factories and beyond.

The EGX edge platform starts with the tiny Nvidia Jetson Nano, which, consuming just a few watts, delivers one-half trillion operations per second (0.5 TOPS) of processing for tasks such as image recognition. It spans all the way to a full rack of Nvidia T4 servers delivering more than 10,000 TOPS for real-time speech recognition and other real-time AI tasks.

The GPU manufacturer partnered with Red Hat to integrate and optimize its Nvidia Edge Stack with OpenShift, the leading enterprise-grade Kubernetes container orchestration platform. The Edge Stack is optimized software that includes Nvidia drivers, a CUDA Kubernetes plugin, a CUDA container runtime, CUDA-X libraries and containerized AI frameworks and applications, including TensorRT, TensorRT Inference Server and DeepStream. It is optimized for certified servers and downloadable from the Nvidia NGC registry.
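In practice, a GPU workload on such a Kubernetes or OpenShift cluster is scheduled by requesting the extended resource that Nvidia's Kubernetes device plugin exposes. A minimal illustrative pod manifest is sketched below; the pod name is hypothetical and the NGC image tag is an assumption, not taken from the announcement:

```yaml
# Illustrative sketch: run a containerized inference server on one GPU,
# using the nvidia.com/gpu resource exposed by the CUDA Kubernetes plugin.
apiVersion: v1
kind: Pod
metadata:
  name: trt-inference-server        # hypothetical name
spec:
  containers:
  - name: inference
    image: nvcr.io/nvidia/tensorrtserver:19.05-py3   # NGC image; tag is an assumption
    resources:
      limits:
        nvidia.com/gpu: 1           # schedules the pod onto a GPU node
```

Requesting `nvidia.com/gpu` in the resource limits is what lets the orchestrator place containerized AI workloads only on nodes with available GPUs.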

EGX combines the full range of Nvidia AI computing technologies with Red Hat OpenShift and Nvidia Edge Stack, together with Mellanox and Cisco security, networking and storage technologies. This enables companies in the largest industries — telecom, manufacturing, retail, healthcare and transportation — to quickly stand up state-of-the-art, secure, enterprise-grade AI infrastructures.
Nvidia - www.nvidia.com
