Fig. 7: Neural network training pipeline.
From: A myoelectric digital twin for fast and realistic modelling in deep learning

a Methodology used to build windows from the simulated MUAP template set for the pre-training phase. Each simulated template was 160 samples wide, at a 2048 Hz sampling rate, across 130 channels. First, with 50% probability a MUAP template was placed in the centre of the window; otherwise the window was left empty. MUAP templates from other MU classes were then added at random offsets to generate superpositions. Finally, standard-normal noise was added, and the central 80 samples were paired with the label for supervised learning.

b The neural network architecture and pre-training methodology used to improve the performance of a deep-learning-based HD-sEMG decomposition algorithm. The network consists of a single gated recurrent unit (GRU) layer; predictions are made from a 20-sample window of the hidden-vector output, which is flattened and passed to a sigmoid-activated densely connected linear layer. In the pre-training phase, a multitask learning regimen optimises the GRU parameters on the simulated sEMG. The pre-trained layer can then be used to improve optimisation performance on real sEMG data.
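The window-construction procedure in panel a can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the number of superimposed MUAPs (`n_overlaps`), the circular shift used for the random offset, and the unit noise variance are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

WIN = 160        # template width in samples (at 2048 Hz)
CH = 130         # HD-sEMG channels
LABEL_WIN = 80   # central samples kept for supervised learning

def make_window(templates, target_class, p_target=0.5, n_overlaps=2):
    """Build one training window from simulated MUAP templates.

    templates: dict mapping MU class id -> (CH, WIN) array.
    n_overlaps is an assumed count of superimposed MUAPs
    from other MU classes.
    """
    window = np.zeros((CH, WIN))
    # with 50% probability, place the target MUAP in the centre
    has_target = rng.random() < p_target
    if has_target:
        window += templates[target_class]
    # add MUAPs from other MU classes at random offsets (superpositions);
    # a circular shift stands in for zero-padded shifting here
    others = [c for c in templates if c != target_class]
    for c in rng.choice(others, size=min(n_overlaps, len(others)), replace=False):
        offset = int(rng.integers(-WIN // 2, WIN // 2))
        window += np.roll(templates[c], offset, axis=1)
    # additive standard-normal noise
    window += rng.standard_normal(window.shape)
    # keep the central 80 samples, paired with the binary label
    start = (WIN - LABEL_WIN) // 2
    return window[:, start:start + LABEL_WIN], int(has_target)
```

In this form each call yields one (channels, 80) window plus a 0/1 label indicating whether the target MU fired at the window centre.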
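The prediction head described in panel b can likewise be sketched in NumPy. The GRU hidden size (`H`), the number of MU classes (`N_MU`), and the randomly initialised dense-layer weights are all placeholders not given in the caption; only the 20-sample hidden window, the flattening, and the sigmoid-activated linear layer come from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

H = 64         # assumed GRU hidden size (not stated in the caption)
N_MU = 10      # assumed number of motor-unit classes
PRED_WIN = 20  # hidden-state window used for each prediction

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical dense-layer parameters mapping the flattened
# (PRED_WIN * H) hidden window to one sigmoid output per MU class
W = rng.standard_normal((PRED_WIN * H, N_MU)) * 0.01
b = np.zeros(N_MU)

def predict(hidden_states, t):
    """Firing probabilities at time t: take a 20-sample window of
    GRU hidden vectors, flatten it, and apply a sigmoid-activated
    densely connected linear layer."""
    win = hidden_states[t - PRED_WIN:t]       # (PRED_WIN, H)
    return sigmoid(win.reshape(-1) @ W + b)   # (N_MU,)
```

In training, each sigmoid output would be compared against the corresponding MU's firing label, matching the multitask set-up in which one network jointly predicts all MU classes.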