A recent article in Business Insider stated, “AI isn’t part of the future of technology. AI is the future of technology”. AI and its related disciplines are moving rapidly from research labs to diverse real-world applications, with tangible benefits for consumers and innovative business models – from autonomous vehicles to natural language processing and cognitive expert advisors.
The list of applications employing AI and computer vision crosses multiple industries and market segments. AI is already commonplace both in consumer applications, such as personal assistants, and in commercial use cases such as credit card fraud detection, security and robotics. AI is expected to be the next general purpose technology – a significantly disruptive, long-term source of broadly diffused growth, which is likely to last for at least 75 years.
A significant portion of this “intelligence” has taken the shape of neural networks (NNs) for processing, segmenting and classifying images. NNs have proved highly capable across many different tasks, producing fast, accurate results that in some cases exceed human capabilities. Open source frameworks such as Caffe and TensorFlow are enabling the dissemination and democratisation of NNs, creating a vibrant ecosystem of researchers and developers around them. The introduction of an NN API for Android will help the industry focus its efforts and accelerate the adoption of NNs even further.
Processing in edge devices
For NNs to do their job correctly, they first need to be trained. Typically, this is done ‘offline’ in the cloud and relies on powerful server hardware. Deploying and running the trained neural network model to recognise patterns or objects is known as inferencing, and it must happen in real time. Today, this stage is also performed in the cloud but, moving forward, due to scalability issues and to fully achieve AI’s potential, it will need to be done at the edge – for example, on mobile and embedded devices. This shift is also driven by the increasing need for AI-enabled devices to operate remotely and/or untethered, such as drones, smartphones and augmented reality smart glasses.
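The split described above can be sketched in a few lines of plain Python. This is an illustrative toy only – a single-neuron logistic “network” stands in for a real NN, and the function names (`train`, `infer`) are invented for the example – but it shows the essential point: training consumes labelled data on powerful hardware, while the edge device receives only the frozen weights and runs lightweight inferencing.

```python
# Minimal sketch of offline training vs. edge inferencing.
# The model here is a toy single-neuron classifier, not a real NN.
import math

def train(samples, labels, epochs=2000, lr=0.5):
    """'Cloud' phase: fit weights by gradient descent on labelled data."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = p - y                      # gradient of the cross-entropy loss
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b  # the "trained model" – all that ships to the device

def infer(model, x):
    """'Edge' phase: run the frozen model; no training data or labels needed."""
    w, b = model
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Offline: learn a simple OR-like decision boundary from labelled samples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
model = train(samples, labels)

# On-device: only the weights travelled; inferencing is cheap and local.
print([infer(model, x) for x in samples])  # → [0, 1, 1, 1]
```

In a production setting the same separation appears as, for example, a framework-trained model being converted to a compact on-device format, but the division of labour – heavy, data-hungry training in the cloud; light, real-time inferencing at the edge – is the same.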
Looking at connectivity in more detail, mobile networks might not always be available, whether 3G, 4G or 5G, not to mention the prohibitive cost of streaming multiple simultaneous high-resolution video feeds. Sending data to and from the cloud and expecting a decision in real time is therefore not realistic. As such, it is now time to move the processing and deployment of NNs to edge devices: running them over the network is simply not practical, given the issues highlighted earlier – scalability, latency, sporadic inaccessibility and a lack of suitable security.