Semantic segmentation is the task of classifying each pixel in an image into a specific category, or class, based on the content it represents. Unlike object detection, which identifies and localizes objects within an image, semantic segmentation provides a more detailed understanding of the scene by assigning a label to every pixel. The goal is to partition the image into regions corresponding to different semantic categories, such as objects, structures, or background.
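As a rough illustration of this per-pixel classification framing (not tied to any specific paper below), a segmentation network produces a score map with one channel per class, and the predicted label map is the per-pixel argmax over those channels; the tensor shapes and class count here are arbitrary placeholders:

```python
# Minimal sketch: semantic segmentation as per-pixel classification.
# A network maps an image to a (num_classes, H, W) score map; the predicted
# label map is the per-pixel argmax over the class dimension.
import torch

num_classes, H, W = 21, 256, 256             # arbitrary example label space and resolution
logits = torch.randn(1, num_classes, H, W)   # stand-in for a segmentation network's output
label_map = logits.argmax(dim=1)             # (1, H, W): one class index per pixel
print(label_map.shape)
```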

Posts

Improving Test-Time Adaptation For Histopathology Image Segmentation: Gradient-To-Parameter Ratio Guided Feature Alignment

In the field of histopathology, computer-aided systems face significant challenges due to diverse domain shifts, including variations in tissue source organ, preparation, and scanning protocols. These domain shifts can significantly impact the performance of algorithms on histopathology tasks such as cancer segmentation. In this paper, we address this problem by proposing a new multi-task extension of test-time adaptation (TTA) for simultaneous semantic and instance segmentation of nuclei. First, to mitigate domain shifts during testing, we use a feature alignment TTA method, through which we adapt the feature vectors of the target data based on feature statistics derived from the source data. Second, the ratio of Gradient norm to Parameter norm (G2P) is proposed to guide the feature alignment procedure. Our approach requires only a model pre-trained on the source data, without access to the source dataset during TTA. This is particularly crucial in medical applications, where access to training data may be restricted due to privacy concerns or patient consent. Through experimental validation, we demonstrate that the proposed method consistently yields competitive results when applied to out-of-distribution data across multiple datasets.
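The following is a hedged sketch of test-time feature alignment guided by a gradient-to-parameter ratio. The alignment loss, the hooked-feature dictionary, and the way the G2P ratio gates parameter updates are illustrative assumptions, not the paper's implementation:

```python
# Hedged sketch of statistics-based feature alignment at test time, loosely
# following the idea described above. Names and the gating scheme are assumptions.
import torch
import torch.nn as nn

def feature_alignment_loss(feats, src_mean, src_var):
    """Match per-channel mean/variance of target features to stored source statistics."""
    mean = feats.mean(dim=(0, 2, 3))
    var = feats.var(dim=(0, 2, 3))
    return (mean - src_mean).pow(2).mean() + (var - src_var).pow(2).mean()

def g2p_ratio(param):
    """Gradient-to-parameter norm ratio for one parameter tensor."""
    if param.grad is None:
        return 0.0
    return param.grad.norm() / (param.norm() + 1e-12)

def tta_step(model, hooked_feats, source_stats, optimizer, ratio_threshold=1e-3):
    # 1) alignment loss between target-batch statistics and stored source statistics
    loss = sum(
        feature_alignment_loss(f, *source_stats[name])
        for name, f in hooked_feats.items()
    )
    optimizer.zero_grad()
    loss.backward()
    # 2) one plausible way to let the G2P ratio "guide" adaptation:
    #    drop gradients for parameters whose ratio is below a threshold,
    #    so only the most affected parameters are updated (assumption).
    for p in model.parameters():
        if p.grad is not None and g2p_ratio(p) < ratio_threshold:
            p.grad = None
    optimizer.step()
    return loss.item()
```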

Learning Semantic Segmentation from Multiple Datasets with Label Shifts

While it is desirable to train segmentation models on an aggregation of multiple datasets, a major challenge is that the label spaces of the datasets may conflict with one another. To tackle this challenge, we propose UniSeg, an effective and model-agnostic approach to automatically train segmentation models across multiple datasets with heterogeneous label spaces, without requiring any manual relabeling efforts. Specifically, we introduce two new ideas that account for conflicting and co-occurring labels to achieve better generalization performance in unseen domains. First, we identify a gradient conflict in training incurred by mismatched label spaces and propose a class-independent binary cross-entropy loss to alleviate such label conflicts. Second, we propose a loss function that considers class relationships across datasets for a better multi-dataset training scheme. Extensive quantitative and qualitative analyses on road-scene datasets show that UniSeg improves over multi-dataset baselines, especially on unseen datasets, e.g., achieving a gain of more than 8 percentage points in IoU on KITTI. Furthermore, UniSeg achieves 39.4% IoU on the WildDash2 public benchmark, making it one of the strongest submissions in the zero-shot setting. Our project page is available at https://www.nec-labs.com/~mas/UniSeg.
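A hedged sketch of what a class-independent binary cross-entropy loss over a unified label space might look like: each class is treated as an independent binary problem, and classes a dataset does not define are masked out so they produce no conflicting gradients. The masking scheme and names are assumptions for illustration, not UniSeg's released code:

```python
# Hedged sketch of a class-independent BCE loss for multi-dataset training
# with mismatched label spaces. Function and variable names are assumptions.
import torch
import torch.nn.functional as F

def masked_bce_segmentation_loss(logits, target, valid_classes):
    """
    logits:        (B, C, H, W) per-class scores over the unified label space
    target:        (B, H, W) class indices from one source dataset
    valid_classes: (C,) bool mask of classes actually defined in that dataset
    Each class is an independent binary problem; undefined classes are excluded,
    so their gradients do not conflict across datasets.
    """
    B, C, H, W = logits.shape
    one_hot = F.one_hot(target, num_classes=C).permute(0, 3, 1, 2).float()
    per_class = F.binary_cross_entropy_with_logits(logits, one_hot, reduction="none")
    mask = valid_classes.view(1, C, 1, 1).float()
    return (per_class * mask).sum() / (mask.sum() * B * H * W + 1e-12)
```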

Domain Adaptation for Structured Output via Discriminative Patch Representations

Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn supervised models like convolutional neural networks. However, models trained on one data domain may not generalize well to other domains without annotations for model finetuning. To avoid the labor-intensive process of annotation, we develop a domain adaptation method to adapt the source data to the unlabeled target domain. We propose to learn discriminative feature representations of patches in the source domain by discovering multiple modes of patch-wise output distribution through the construction of a clustered space. With such representations as guidance, we use an adversarial learning scheme to push the feature representations of target patches in the clustered space closer to the distributions of source patches. In addition, we show that our framework is complementary to existing domain adaptation techniques and achieves consistent improvements on semantic segmentation. Extensive ablations and results are demonstrated on numerous benchmark datasets with various settings, such as synthetic-to-real and cross-city scenarios.
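Below is a minimal, hedged sketch of the patch-level adversarial alignment idea: a small discriminator operates on patch-level feature maps, and target features are pushed toward the source distribution. The module names are hypothetical, and the construction of the clustered patch space is omitted here:

```python
# Hedged sketch of patch-level adversarial feature alignment.
# Module names are illustrative; the clustered-space construction is not shown.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Classifies each spatial patch representation as source (1) vs. target (0)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, patch_feats):
        return self.net(patch_feats)  # (B, 1, H', W') patch-wise logits

def adversarial_alignment_loss(discriminator, target_patch_feats):
    # Generator-side loss: make target patch features indistinguishable from source ones.
    pred = discriminator(target_patch_feats)
    return F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
```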

Learning random-walk label propagation for weakly-supervised semantic segmentation

Large-scale training for semantic segmentation is challenging due to the expense of obtaining training data for this task relative to other vision tasks. We propose a novel training approach to address this difficulty. Given cheaply-obtained sparse image labelings, we propagate the sparse labels to produce guessed dense labelings. A standard CNN-based segmentation network is trained to mimic these labelings. The label-propagation process is defined via random-walk hitting probabilities, which leads to a differentiable parameterization with uncertainty estimates that are incorporated into our loss. We show that by learning the label-propagator jointly with the segmentation predictor, we are able to effectively learn semantic edges given no direct edge supervision. Experiments also show that training a segmentation network in this way outperforms the naive approach.
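For intuition, here is a hedged sketch of random-walk label propagation via hitting probabilities on a pixel affinity graph (the classic random-walker linear system). It illustrates only the propagation mechanism; the paper's differentiable, jointly learned propagator and its uncertainty-aware loss are not reproduced:

```python
# Hedged sketch of random-walk label propagation from sparse seeds.
# Probabilities are the chances that a walker started at each pixel first hits
# a seed of each class, obtained by solving the random-walker linear system.
import numpy as np

def random_walk_propagate(W, seed_idx, seed_labels, num_classes):
    """
    W:           (N, N) symmetric non-negative pixel affinity matrix
    seed_idx:    (S,) indices of sparsely labeled pixels
    seed_labels: (S,) class index for each seed
    Returns an (N, num_classes) array of hitting probabilities per class.
    """
    N = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                     # graph Laplacian
    unlabeled = np.setdiff1d(np.arange(N), seed_idx)
    L_u = L[np.ix_(unlabeled, unlabeled)]
    B = L[np.ix_(unlabeled, seed_idx)]
    probs = np.zeros((N, num_classes))
    probs[seed_idx, seed_labels] = 1.0                 # seeds trivially hit themselves
    for k in range(num_classes):
        m_k = (seed_labels == k).astype(float)
        probs[unlabeled, k] = np.linalg.solve(L_u, -B @ m_k)
    return probs
```

The dense probabilities can then serve as guessed dense labelings for training a segmentation network, in the spirit of the approach described above.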