
Asynchronous distributed deep learning technology

Technology News
By Ally Winning

The researchers are investigating a training algorithm that obtains a global model as if it had been trained by aggregating the data on a single server, even when the data are distributed across multiple servers, as in edge computing. The proposed technology, say the researchers, is of both academic and practical interest: it enables users to obtain a global model – a model trained as though all the data were in one place – even when (1) statistically nonhomogeneous data subsets are placed on multiple servers, and (2) the servers exchange model-related variables only asynchronously.

Currently, machine learning – especially deep learning – generally involves training models for tasks such as image and speech recognition by aggregating data at a fixed location, such as a cloud data center. However, in the IoT era, where everything is connected to networks, aggregating vast amounts of data in the cloud is becoming difficult.

More and more users demand that data be held on a local server or device because of privacy concerns, and legal regulations such as the EU’s General Data Protection Regulation (GDPR) have been enacted to guarantee data privacy. As a result, say the researchers, interest is growing in edge computing, which decentralizes data processing and storage across servers to reduce the processing load and response times on cloud and communication networks, and to protect data privacy.

One technical challenge toward this goal is enabling data aggregation, model training and processing in a decentralized manner. The researchers’ proposed training algorithm can obtain a global model even when different, nonhomogeneous data subsets are placed on multiple servers and the communication between them is asynchronous. Instead of aggregating or exchanging raw data (e.g., images or speech) between servers, the variables associated with each server’s locally trained model are exchanged asynchronously, and these exchanges yield a global model.

The training algorithm comprises two processes: a procedure that updates the variables inside each server, and a variable exchange between servers. In their experiments, the researchers used a ring network of eight servers and the CIFAR-10 image data set, which contains ten classes of objects (e.g., plane, car, bird, cat).
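The paper’s actual update rules are not reproduced here; the following is a minimal NumPy sketch of the general pattern the two processes describe – local gradient steps on a toy per-server loss, interleaved with asynchronous averaging of model variables with ring neighbours. The schedule, the quadratic loss, and the averaging rule are all illustrative assumptions, not the researchers’ method.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8     # servers arranged in a ring
DIM = 4   # toy model: one weight vector per server

# Each server holds its own model variables and a private "data" target
# standing in for its local data subset.
weights = [rng.normal(size=DIM) for _ in range(N)]
targets = [rng.normal(size=DIM) for _ in range(N)]

def local_update(i, lr=0.1):
    """Process 1: update the variables inside server i -- one gradient
    step on the toy local loss 0.5 * ||w_i - t_i||^2."""
    weights[i] = weights[i] - lr * (weights[i] - targets[i])

def exchange(i):
    """Process 2: exchange variables with the two ring neighbours and
    average them; no raw data ever leaves a server."""
    left, right = (i - 1) % N, (i + 1) % N
    weights[i] = (weights[i] + weights[left] + weights[right]) / 3.0

# Asynchronous-style schedule: one randomly chosen server acts at a time,
# and variable exchanges happen only intermittently.
for _ in range(4000):
    i = int(rng.integers(N))
    local_update(i)
    if rng.random() < 0.5:
        exchange(i)

# With the local updates switched off, continued exchanges drive every
# server toward one common set of variables -- a single shared model.
for _ in range(300):
    for i in range(N):
        exchange(i)

spread = max(np.linalg.norm(weights[a] - weights[b])
             for a in range(N) for b in range(N))
print(f"max disagreement between servers after training: {spread:.2e}")
```

A sketch like this only illustrates how the two processes interleave; the edge-consensus learning algorithm itself uses principled update and exchange rules so that the exchanged variables converge to a global model.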

The image data set was divided into eight subsets, one per server, with statistical nonhomogeneity – specifically, each server received only five of the ten classes. When the data subsets on the eight servers are aggregated, the complete ten-class training data set is recovered. The results of their simulation experiments, say the researchers, show that a global model can be obtained even with nonhomogeneous data subsets and asynchronous server communication.
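A hypothetical partition mirroring that setup can be written down directly; the class-to-server assignment below is invented for illustration (the paper’s actual split is not given in the article):

```python
# Ten object classes, as in CIFAR-10 (names follow the article's examples);
# eight servers, five classes each -- this assignment is illustrative only.
classes = ["plane", "car", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]
n_servers, per_server = 8, 5

# Give each server a wrapping window of five consecutive classes, so every
# subset is statistically nonhomogeneous but the union covers all ten.
server_classes = [
    {classes[(2 * s + k) % len(classes)] for k in range(per_server)}
    for s in range(n_servers)
]

for s, subset in enumerate(server_classes):
    print(f"server {s}: {sorted(subset)}")

# Aggregating the eight subsets recovers the complete ten-class data set.
covered = set().union(*server_classes)
print(f"classes covered by all servers together: {len(covered)}/10")
```

With only five classes held locally, no single server could learn the full ten-class task on its own, which is why the asynchronous exchange of model variables between servers is needed.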

Looking ahead, the researchers plan to continue research and development for commercialization. In addition, they say they will release the edge-consensus learning source code to promote further development of the technology as well as collaboration on applications.

For more, see “Edge-consensus Learning: Deep Learning on P2P Networks with Nonhomogeneous Data.”

NTT

Related articles:
Top 5 digital infrastructure technology trends for 2020
Intel, NSF invest in machine learning for wireless systems
Serverless edge computing platform for IoT in public beta

 


eeNews Embedded