Extended Data Fig. 5: Deep learning networks for image binarization, rectification, and generation. | Nature Materials


From: On-patient medical record and mRNA therapeutics using intradermal microneedles


(a) The image binarization network uses a U-Net structure adapted from an off-the-shelf convolutional neural network (CNN) for biomedical image segmentation. This image-based ConvNet is lightweight, reasonably accurate, and comparatively easy to train with a modest number of training examples. (DoubleConv: double convolutional layers; BN: batch normalization; ReLU: rectified linear unit, used as the nonlinear activation function; skip connection: inputs or outputs from previous layers copied directly and stacked as the input of the current layer.)

(b) After the binarization step and before the rectification step, the 2D-array region is rotated, cropped, and resized to a target size. A rectangle with the minimum area that covers all the white bits of the 2D-array region is generated. The final crop is 35% larger than the minimum-area rectangle, with the rectangle's center preserved as the reference point.

(c) A convolutional neural network (CNN)-based structure is used for the image recognition model. (Conv: convolutional layer; BN: batch normalization; ReLU: rectified linear unit, used as the nonlinear activation function.)

(d) RecognitionNet requires the microneedle patch size (N×N) as an input, whereas BinarizationNet is independent of the microneedle patch size.

(e) 650,000 paired synthetic training samples were simulated to build a robust recognition network. Simulated fluorescence images were generated with potential image variations at three levels: (1) microneedle quality, (2) image acquisition, and (3) camera hardware and software. The rectification network described above was applied to these synthetic images to produce paired outputs, which were then fed to the CNN-based recognition network so that it could learn the mapping from rectified images to binary arrays.

(f) Examples of synthetic patch images with distortion, rotation, defocusing, motion blur, increased background noise, lowered contrast, and more.
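The crop described in panel (b) can be sketched as follows. This is a minimal illustration, not the authors' code: it uses an axis-aligned bounding box of the white pixels rather than the rotated minimum-area rectangle in the paper (which would typically come from something like OpenCV's `cv2.minAreaRect`), and the function name and `margin` parameter are illustrative.

```python
import numpy as np

def crop_array_region(binary_img: np.ndarray, margin: float = 0.35) -> np.ndarray:
    """Crop the 2D-array region from a binarized image.

    Finds the bounding box of all white (nonzero) pixels, enlarges it by
    `margin` (35% in the paper) about its center so the center is preserved
    as the reference point, and returns the cropped patch. Simplified to an
    axis-aligned box; the paper uses a rotated minimum-area rectangle.
    """
    ys, xs = np.nonzero(binary_img)
    y0, y1 = ys.min(), ys.max()
    x0, x1 = xs.min(), xs.max()
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0          # preserved center
    h = (y1 - y0 + 1) * (1.0 + margin)                  # enlarged height
    w = (x1 - x0 + 1) * (1.0 + margin)                  # enlarged width
    top = max(0, int(round(cy - h / 2)))
    left = max(0, int(round(cx - w / 2)))
    bottom = min(binary_img.shape[0], int(round(cy + h / 2)))
    right = min(binary_img.shape[1], int(round(cx + w / 2)))
    return binary_img[top:bottom, left:right]
```

In a full pipeline this crop would then be resized to the fixed input size expected by the recognition network.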
(g) The validation performance of the image binarization U-Net and the image recognition CNN is 0.9297 and 0.9473, respectively, in terms of the Sørensen–Dice coefficient (scale of 0 to 1; higher is better). The left plot shows the validation loss of the image binarization U-Net, and the right plot shows the validation loss of the image recognition CNN. Applied to real-world pig images without fine-tuning, the U-Net (image binarization) and CNN (image recognition) models achieved signal retention above 98% over 12 weeks.
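The Sørensen–Dice coefficient used in panel (g) to score both networks has a standard definition, 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal sketch (function name and `eps` smoothing term are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Sørensen–Dice coefficient between two binary masks.

    Returns 2*|A ∩ B| / (|A| + |B|), on a 0-to-1 scale where higher is
    better; `eps` guards against division by zero for empty masks.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0, disjoint masks score 0.0, so the reported 0.9297 and 0.9473 indicate close agreement between predicted and ground-truth binary arrays.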
