TENSAI Flow software helps development of ML applications

Eta Compute has announced the TENSAI Flow software suite, developed to assist in designing machine learning applications for IoT and low power edge devices.
By Ally Winning


“Neural network and embedded software designers are seeking practical ways to make developing machine learning for edge applications less frustrating and time-consuming,” said Ted Tewksbury, CEO of Eta Compute. “With TENSAI Flow, Eta Compute addresses every aspect of designing and building a machine learning application for IoT and low power edge devices. Now, designers can optimize neural networks by reducing memory size, the number of operations, and power consumption, and embedded software designers can reduce the complexities of adding AI to embedded edge devices, saving months of development time.”

The TENSAI Flow software quickly confirms a project’s feasibility and provides proof of concept. It features a neural network compiler, a neural network zoo, and middleware that includes FreeRTOS, HAL and frameworks for sensors, as well as IoT/cloud enablement.

“In order to best unlock the benefits of TinyML we need highly optimized hardware and algorithms. Eta Compute’s TENSAI provides an ideal combination of highly efficient ML hardware, coupled with an optimized neural network compiler,” said Zach Shelby, CEO of Edge Impulse. “Together with Edge Impulse and the TENSAI Sensor Board, this is the best possible solution to achieve extremely low-power ML applications.”

TENSAI Flow’s exclusive neural network compiler optimizes neural networks for execution on Eta Compute’s device while minimizing power usage. The middleware simplifies dual-core programming by eliminating the need to write customized code to take full advantage of the DSPs. The Neural Network Zoo speeds development with ready-to-use networks for the most common applications, including motion, image and sound classification, so developers only need to train the networks with their own data (sketched below). The insight gained from TENSAI Flow’s real-world, field-tested examples lets developers see the potential of neural sensor processors in terms of energy efficiency and performance.
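As a rough illustration of that “start from a zoo network, train with your own data” workflow, the following Python/TensorFlow sketch builds a compact CIFAR10-style image classifier and trains it on labelled data. The model architecture, dataset and file names are placeholders for illustration only; they are not Eta Compute’s actual zoo networks or APIs, and the device-specific compilation step is handled separately by the TENSAI toolchain.

# Minimal sketch: train a small, zoo-style CNN on your own labelled data.
# The network below is a generic placeholder, not an Eta Compute model.
import tensorflow as tf

def build_small_cnn(num_classes: int = 10) -> tf.keras.Model:
    """Compact CNN in the spirit of a CIFAR10-class zoo network."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Developers would substitute their own training data here;
    # CIFAR10 is used only to keep the sketch self-contained.
    (x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
    model = build_small_cnn(num_classes=10)
    model.fit(x_train / 255.0, y_train, epochs=1, batch_size=64)
    model.save("my_classifier.keras")  # handed to the device toolchain afterwards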

In a comparison with a direct implementation of the same CIFAR10 neural network on a competitive device, the TENSAI neural network compiler running on the TENSAI SoC improved energy per inference by a factor of 54. Using the CIFAR10 network from the TENSAI neural network zoo together with the TENSAI neural network compiler further improves energy per inference, to a factor of 200.

“Google and the TensorFlow team have been dedicated to bringing machine learning to the tiniest devices. Eta Compute’s TENSAI Flow is another step in the same direction and enables TensorFlow networks to run on Eta Compute’s ultra low power SoC, with the best optimization the company can provide,” said Pete Warden, Lead of the TensorFlow Mobile/Embedded team at Google. “We welcome this initiative that sets new benchmarks for machine learning in edge devices and shows the dynamism of the TinyML field.”

TENSAI Flow’s interface with Edge Impulse allows training data to be acquired and stored, letting customers train once and reuse real-world models for future development. The software automatically optimizes TensorFlow Lite AI models for Eta Compute’s TENSAI SoC. Using TENSAI Flow, the TENSAI SoC can seamlessly load AI models that include sensor interfaces. TENSAI Flow can also automatically provision and connect devices to the cloud and upgrade firmware over the air based on new models or data.
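For context, the generic TensorFlow Lite portion of such a flow looks roughly like the sketch below: a trained Keras model is converted to a fully quantized .tflite file suitable for a low-power MCU/SoC target. The file names and calibration data are placeholders, and the subsequent TENSAI-specific optimization is Eta Compute’s own proprietary tooling, which is not shown here.

# Sketch: convert a trained Keras model to an int8-quantized TensorFlow Lite model.
# Calibration data is random here purely to keep the example self-contained.
import numpy as np
import tensorflow as tf

def representative_data():
    """Yield a few calibration samples for full-integer quantization."""
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

model = tf.keras.models.load_model("my_classifier.keras")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("my_classifier_int8.tflite", "wb") as f:
    f.write(converter.convert())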

More information

www.EtaCompute.com
