Fig. 4: Hardware-algorithm co-optimization techniques to improve NeuRRAM inference accuracy.
From: A compute-in-memory chip based on resistive random-access memory

a, Various device and circuit non-idealities (labelled (1) to (7)) of in-memory matrix-vector multiplication (MVM). b, Model-driven chip calibration technique that searches for the optimal chip operating conditions and records output offsets for subsequent cancellation. c, Noise-resilient neural-network training technique in which the model is trained with noise injection; the noise distribution is obtained from hardware characterization. The trained weights are programmed to the continuous analogue conductances of the RRAMs without quantization, as shown by the continuous diagonal band at the bottom. d, Chip-in-the-loop progressive fine-tuning technique: weights are progressively mapped onto the chip one layer at a time, and the hardware-measured outputs of layer n are used as inputs to fine-tune the remaining layers n + 1 to N.
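
The calibration flow in b can be pictured as a search over candidate operating conditions followed by offset recording. The sketch below is a minimal illustration rather than the authors' implementation; `chip_mvm` is a hypothetical stand-in for a hardware MVM call, and the mean-squared-error criterion and per-channel offsets are assumptions.

```python
# Minimal sketch of model-driven calibration (panel b).
# Assumptions: chip_mvm(inputs, condition) is a hypothetical hardware MVM call;
# the reference output is the ideal software MVM of the target weights.
import numpy as np

def calibrate(calib_inputs, target_weights, candidate_conditions, chip_mvm):
    """Search operating conditions, then record offsets for later cancellation."""
    ref = calib_inputs @ target_weights               # ideal (model-predicted) outputs
    best_err, best_cond, best_meas = None, None, None
    for cond in candidate_conditions:                 # e.g. input amplitude, ADC range
        measured = chip_mvm(calib_inputs, cond)       # hardware-measured outputs
        err = np.mean((measured - ref) ** 2)
        if best_err is None or err < best_err:
            best_err, best_cond, best_meas = err, cond, measured
    offsets = (best_meas - ref).mean(axis=0)          # per-output-channel offset
    return best_cond, offsets

# In use, `offsets` would be subtracted from every subsequent MVM result
# obtained under `best_cond`.
```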
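Panel c amounts to injecting weight noise during training so the learned model tolerates the conductance variability seen on hardware. A minimal PyTorch-style sketch follows; the additive Gaussian noise model and the `noise_std` parameter are stand-ins for the hardware-characterized noise distribution.

```python
# Minimal sketch of noise-resilient training (panel c).
# Assumption: the characterized weight noise is approximated as additive Gaussian
# whose standard deviation (noise_std) is relative to the per-layer weight range.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        if self.training:
            # Fresh noise every forward pass, scaled to the layer's weight range
            # as a proxy for the RRAM conductance range.
            scale = self.weight.detach().abs().max()
            noisy_w = self.weight + torch.randn_like(self.weight) * self.noise_std * scale
            return F.linear(x, noisy_w, self.bias)
        return super().forward(x)
```

After training, the unquantized weights would be programmed directly as analogue conductances, consistent with the continuous diagonal band in the figure.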
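Panel d's procedure is a loop over layers: program layer n, measure its real outputs, then fine-tune the downstream layers on those measurements. The sketch below only illustrates the control flow; `program_and_measure` and `finetune` are hypothetical placeholders for the hardware programming/readout and software retraining steps.

```python
# Minimal sketch of chip-in-the-loop progressive fine-tuning (panel d).
# Assumptions: program_and_measure(layer, inputs) programs one layer onto the chip
# and returns hardware-measured activations; finetune(layers, inputs, labels)
# retrains the remaining software layers on those activations.
def progressive_finetune(layers, inputs, labels, program_and_measure, finetune):
    acts = inputs
    for n, layer in enumerate(layers):
        # Replace layer n's ideal outputs with hardware-measured ones so the
        # layers that follow adapt to its non-idealities before being mapped.
        acts = program_and_measure(layer, acts)
        remaining = layers[n + 1:]
        if remaining:
            finetune(remaining, acts, labels)
    return acts
```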