Weak supervision refers to a training scenario in which a model learns from data with noisy, limited, or imprecise labels. Unlike strong supervision, where every data point carries a precise label, weak supervision methods make do with partially labeled or noisy data. Typical techniques leverage heuristics, domain knowledge, or the combination of multiple weak sources to train models when obtaining fully labeled data is difficult or expensive.
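As one illustration of combining weak sources, the sketch below uses a few hand-written heuristic labeling functions that each vote on an example (or abstain) and aggregates their votes by majority. The labeling functions, label names, and sample messages are hypothetical examples, not part of any specific system; practical frameworks usually learn a model of each source's accuracy rather than taking a plain majority vote.

```python
# Minimal sketch of programmatic weak supervision: several heuristic
# "labeling functions" each vote on a label (or abstain), and the votes
# are combined by majority vote into noisy training labels.
# All functions and data here are illustrative assumptions.

from collections import Counter

ABSTAIN = None
SPAM, HAM = 1, 0

def lf_contains_link(text):
    """Heuristic: messages containing URLs are often spam."""
    return SPAM if "http" in text.lower() else ABSTAIN

def lf_short_greeting(text):
    """Heuristic: very short greetings are usually legitimate."""
    return HAM if len(text.split()) < 4 else ABSTAIN

def lf_money_keywords(text):
    """Domain knowledge: money-related keywords suggest spam."""
    keywords = {"free", "winner", "cash", "prize"}
    return SPAM if keywords & set(text.lower().split()) else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_link, lf_short_greeting, lf_money_keywords]

def weak_label(text):
    """Combine labeling-function votes by majority; abstain if none vote."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v is not ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    messages = [
        "Claim your free prize at http://example.com",
        "hi, see you soon",
        "Quarterly report attached for review",
    ]
    for msg in messages:
        print(weak_label(msg), "<-", msg)
```

The resulting noisy labels can then be used to train an ordinary supervised model, which often generalizes beyond the individual heuristics.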

Posts

Learning random-walk label propagation for weakly-supervised semantic segmentation

Large-scale training for semantic segmentation is challenging due to the expense of obtaining training data for this task relative to other vision tasks. We propose a novel training approach to address this difficulty. Given cheaply obtained sparse image labelings, we propagate the sparse labels to produce guessed dense labelings. A standard CNN-based segmentation network is trained to mimic these labelings. The label-propagation process is defined via random-walk hitting probabilities, which leads to a differentiable parameterization with uncertainty estimates that are incorporated into our loss. We show that by learning the label propagator jointly with the segmentation predictor, we are able to effectively learn semantic edges given no direct edge supervision. Experiments also show that training a segmentation network in this way outperforms the naive approach.
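The propagation step can be pictured with the classical random-walker formulation: an unlabeled pixel's class probabilities are the probabilities that a random walk started at that pixel first hits a seed of each class, computed by solving a linear system in the graph Laplacian. The sketch below is not the paper's learned, differentiable propagator; it is a toy dense-Laplacian version on a small 4-connected pixel graph, and the function name propagate_labels, the affinity parameter beta, and the toy image and seeds are illustrative assumptions.

```python
# Toy random-walk label propagation: spread sparse pixel seeds to a dense
# labeling via hitting probabilities on a 4-connected pixel graph.
# Illustrative sketch only; not the paper's differentiable formulation.

import numpy as np

def propagate_labels(image, seeds, num_classes, beta=50.0):
    """Dense labeling from sparse seeds via random-walk hitting probabilities.

    image: (H, W) float array of intensities in [0, 1]
    seeds: (H, W) int array, -1 for unlabeled pixels, else a class id
    """
    H, W = image.shape
    n = H * W
    W_adj = np.zeros((n, n))

    def weight(i, j):
        # Affinity between neighboring pixels: high when intensities match,
        # so the walk tends to stay inside visually homogeneous regions.
        return np.exp(-beta * (image.flat[i] - image.flat[j]) ** 2)

    for r in range(H):
        for c in range(W):
            i = r * W + c
            if c + 1 < W:            # right neighbor
                j = i + 1
                W_adj[i, j] = W_adj[j, i] = weight(i, j)
            if r + 1 < H:            # bottom neighbor
                j = i + W
                W_adj[i, j] = W_adj[j, i] = weight(i, j)

    L = np.diag(W_adj.sum(axis=1)) - W_adj          # graph Laplacian

    seed_flat = seeds.ravel()
    labeled = np.flatnonzero(seed_flat >= 0)
    unlabeled = np.flatnonzero(seed_flat < 0)

    # One-hot seed labels, shape (num_seeds, num_classes).
    M = np.eye(num_classes)[seed_flat[labeled]]

    # Hitting probabilities of each class for unlabeled pixels:
    # solve L_UU X = -L_UL M (the random-walker / Dirichlet problem).
    L_UU = L[np.ix_(unlabeled, unlabeled)]
    L_UL = L[np.ix_(unlabeled, labeled)]
    X = np.linalg.solve(L_UU, -L_UL @ M)

    probs = np.zeros((n, num_classes))
    probs[labeled] = M
    probs[unlabeled] = X
    return probs.argmax(axis=1).reshape(H, W), probs.reshape(H, W, num_classes)

if __name__ == "__main__":
    # Toy image: bright left half, dark right half, one seed per region.
    image = np.zeros((6, 6)); image[:, :3] = 1.0
    seeds = -np.ones((6, 6), dtype=int)
    seeds[3, 0] = 0    # class 0 seed in the bright region
    seeds[3, 5] = 1    # class 1 seed in the dark region
    labels, _ = propagate_labels(image, seeds, num_classes=2)
    print(labels)
```

In the paper's setting, the propagated class probabilities (rather than a fixed propagation like the one above) serve as soft targets for the segmentation network, and the propagator's parameters are learned jointly with the predictor.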