Fig. 2

Proof of concept on MNIST. Only a small subset of the dataset is labeled, whereas the remaining samples carry only (biased) complementary labels. Leveraging these additional samples with solely complementary labels improves segmentation performance over the baseline of supervised training on the small labeled subset alone. The transition matrices \(Q_1\) and \(Q_2\) correspond to two different underlying conditional distributions of the complementary labels. The upper bound, supervised training on the full dataset assuming supervised labels are available for all samples, is shown on the right.
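The biased complementary labels referenced in the caption can be generated from a class-conditional transition matrix. The sketch below is a minimal, hypothetical illustration (not the paper's code): it assumes \(Q[y, c] = P(\bar{y} = c \mid y)\) with a zero diagonal, so a complementary label never coincides with the true class; the function names and the uniform-bias default are illustrative assumptions.

```python
import numpy as np

def make_transition_matrix(num_classes, bias=None, rng=None):
    """Row-stochastic transition matrix Q with zero diagonal.

    Q[y, c] gives the probability of drawing complementary label c
    for true label y. `bias=None` yields the uniform (unbiased) case;
    passing a weight vector produces a biased distribution, analogous
    to the two matrices Q_1 and Q_2 in the figure.
    """
    rng = np.random.default_rng(rng)
    if bias is None:
        Q = np.ones((num_classes, num_classes))
    else:
        Q = np.tile(np.asarray(bias, dtype=float), (num_classes, 1))
    np.fill_diagonal(Q, 0.0)          # complementary label != true label
    Q /= Q.sum(axis=1, keepdims=True) # normalize each row to a distribution
    return Q

def sample_complementary(labels, Q, rng=None):
    """Draw one complementary label per true label according to Q."""
    rng = np.random.default_rng(rng)
    return np.array([rng.choice(len(Q), p=Q[y]) for y in labels])

# Example: 10 MNIST classes, uniform over the 9 incorrect classes.
Q = make_transition_matrix(10)
y_true = np.array([3, 7, 0])
y_bar = sample_complementary(y_true, Q, rng=0)
assert not np.any(y_bar == y_true)
```

Two such matrices with different row distributions would realize the two conditional distributions that \(Q_1\) and \(Q_2\) denote in the figure.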