Figure 5 | Scientific Reports

From: Deep Cytometry: Deep learning with Real-time Inference in Cell Sorting and Flow Cytometry

Regularization by L2 and dropout. Regularization is critical in balancing the trade-off between underfitting (bias) and overfitting (variance). This model combines an L2 norm penalty with dropout, which introduces two hyperparameters: the L2 penalty multiplier and the dropout keep probability (1 - dropout rate). These two hyperparameters are explored by random search, and the optimal pair is used in the final training. Regularization performance is evaluated by the last-epoch validation cross entropy of models trained with different pairs of regularization hyperparameters (each dot represents one hyperparameter pair). (a) The training cross entropy increases as either the L2 multiplier or the dropout rate is increased. (b) The validation cross entropy, on the other hand, is large at small L2 multiplier and dropout rate due to overfitting. The optimal regularization pair is the one that minimizes the validation cross entropy.
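The random search described above can be sketched as follows. This is a minimal illustration of the search logic only, not the authors' code: the function `validation_cross_entropy` is a hypothetical stand-in for a full training run (here a toy loss surface with an assumed optimum), and the sampling ranges are assumptions, not values from the paper.

```python
import math
import random

def validation_cross_entropy(l2_mult, dropout_rate):
    # Hypothetical stand-in for training the model and returning the
    # last-epoch validation cross entropy for one regularization pair.
    # Toy surface with an assumed optimum near l2_mult=1e-4, dropout_rate=0.3.
    return (math.log10(l2_mult) + 4) ** 2 + (dropout_rate - 0.3) ** 2

random.seed(0)
best_pair, best_ce = None, float("inf")
for _ in range(50):
    # Random search: sample the L2 multiplier log-uniformly and the
    # dropout rate uniformly (keep probability = 1 - dropout rate).
    l2_mult = 10 ** random.uniform(-6, -1)
    dropout_rate = random.uniform(0.0, 0.8)
    ce = validation_cross_entropy(l2_mult, dropout_rate)
    if ce < best_ce:
        best_pair, best_ce = (l2_mult, dropout_rate), ce

print(best_pair, best_ce)
```

The pair with the minimal validation cross entropy is then used for the final training run, mirroring how panel (b) selects the optimal point.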