Researchers at Leti (Grenoble, France) have reported how the inherent variability of RRAM memory cells can be used to improve machine learning. The paper has been published in the January 2021 edition of Nature Electronics and is titled: 'In-situ learning using intrinsic memristor variability via Markov chain Monte Carlo sampling.'
Resistive random access memories (RRAMs), sometimes referred to as memristors, can be used for machine learning by exploiting Kirchhoff's current law to implement the dot-product, or multiply-accumulate, operation used in analog neural networks.
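The principle can be illustrated with a short sketch, assuming a crossbar array whose cell conductances encode weights: Ohm's law gives each cell's current, and Kirchhoff's current law sums the currents on each shared wire, yielding a multiply-accumulate in the analog domain. The values below are arbitrary illustrative numbers.

```python
import numpy as np

# Illustrative sketch of an analog crossbar dot product (values are
# hypothetical). Each cell's conductance G[i, j] encodes a weight;
# applying voltage V[j] to row j makes the cell pass current
# I = G[i, j] * V[j] (Ohm's law), and Kirchhoff's current law sums
# those currents on the shared output wire of each column.
G = np.array([[1e-6, 2e-6],
              [3e-6, 4e-6]])   # conductances in siemens (the weights)
V = np.array([0.1, 0.2])       # input voltages in volts

# Summed wire currents: I[i] = sum_j G[i, j] * V[j] -- a multiply-accumulate
I = G @ V                      # -> [5e-7, 1.1e-6] amperes
```

In a physical array this sum happens in a single step, which is why the scheme is attractive for low-energy inference.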
However, training, or learning, at the edge is rarely done, because of the energy cost of von Neumann computing or, in the case of in-memory machine learning, the precision and accuracy demanded by backpropagation algorithms. Progress has also been thwarted by device non-idealities such as nonlinear conductance modulation, the lack of multi-level conductance states, and device-to-device variability.
To get around that problem, the team developed a method that actively exploits the randomness, implementing a Markov Chain Monte Carlo (MCMC) sampling learning algorithm in a fabricated chip that acts as a Bayesian machine-learning model.
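To see where randomness enters, consider a minimal Metropolis-style MCMC loop. This is an illustrative software sketch of the general algorithm, not Leti's circuit: the chain proposes a new parameter by adding random noise, the role the paper assigns to intrinsic RRAM variability, and accepts or rejects the proposal according to the Bayesian posterior. The model (one-parameter logistic regression) and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(w, x, y):
    # Hypothetical Bayesian model: one-parameter logistic regression
    # with a standard normal prior on the weight w.
    logits = w * x
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    log_prior = -0.5 * w**2
    return log_lik + log_prior

# Toy data: the label flips from 0 to 1 as x increases
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])

w = 0.0
samples = []
for _ in range(2000):
    # Stochastic proposal: in the RRAM scheme this randomness comes
    # for free from device programming variability
    w_new = w + rng.normal(scale=0.5)
    # Metropolis rule: accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_posterior(w_new, x, y) - log_posterior(w, x, y):
        w = w_new
    samples.append(w)

# Discard burn-in; the remaining samples approximate the posterior over w
posterior_mean = np.mean(samples[500:])
```

The chain's samples form the learned model: rather than a single trained weight, the approach yields a posterior distribution over weights, which is what makes the memristor variability an asset rather than a defect.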
Bayesian probability is a refinement of conventional probability theory in which a probability is assigned to a hypothesis and then updated in the light of subsequent evidence, making it particularly relevant to machine learning.
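A one-line worked example of such an update, using a hypothetical coin with two competing hypotheses, shows the mechanism:

```python
from fractions import Fraction

# Hypothetical Bayesian update: two hypotheses about a coin's bias,
# each assigned a prior probability, revised after observing one head.
prior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}
likelihood_heads = {"fair": Fraction(1, 2), "biased": Fraction(3, 4)}

# Bayes' rule: posterior is proportional to prior times likelihood
unnorm = {h: prior[h] * likelihood_heads[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}
# posterior["biased"] = 3/5 -- the evidence shifts belief toward bias
```

Each new observation repeats this step, so the model's beliefs are continually refined as data arrives.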
In Leti's example, the in-situ learning is realized using nanosecond pulses and, compared with a CMOS logic implementation, requires five orders of magnitude less energy. This approach could therefore bring learning to edge-computing systems.