MM-TTA: Multi-Modal Test-Time Adaptation for 3D Semantic Segmentation
We propose a Multi-Modal Test-Time Adaptation (MM-TTA) framework that enables a model to be quickly adapted to multi-modal test data without access to the source-domain training data. We introduce two modules: 1) Intra-PG, which produces reliable pseudo-labels within each modality by updating two models (their batch-norm statistics) at different paces, i.e., slow and fast updating schemes with momentum, and 2) Inter-PR, which adaptively selects pseudo-labels across the two modalities. These two modules seamlessly collaborate to co-produce the final cross-modal pseudo-labels that guide test-time adaptation.
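The two modules can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, momentum values, and the hard confidence-based selection in `inter_pr_fuse` are assumptions (the paper's Inter-PR selects pseudo-labels adaptively; a simple per-point confidence comparison is used here for clarity).

```python
import numpy as np

def intra_pg_update(slow_stats, fast_stats, batch_stats,
                    slow_momentum=0.99, fast_momentum=0.9):
    """Intra-PG (sketch): maintain two sets of batch-norm statistics per
    modality, updated at different paces via exponential moving averages.
    The momentum values here are illustrative, not the paper's settings."""
    slow = slow_momentum * slow_stats + (1.0 - slow_momentum) * batch_stats
    fast = fast_momentum * fast_stats + (1.0 - fast_momentum) * batch_stats
    return slow, fast

def inter_pr_fuse(probs_2d, probs_3d):
    """Inter-PR (simplified hard-selection variant): for each point, take
    the pseudo-label from whichever modality is more confident.
    probs_* have shape (num_points, num_classes)."""
    conf_2d = probs_2d.max(axis=1)
    conf_3d = probs_3d.max(axis=1)
    labels_2d = probs_2d.argmax(axis=1)
    labels_3d = probs_3d.argmax(axis=1)
    return np.where(conf_2d >= conf_3d, labels_2d, labels_3d)

# Toy example: 3 points, 2 classes, predictions from a 2D and a 3D branch.
p2d = np.array([[0.9, 0.1], [0.4, 0.6], [0.55, 0.45]])
p3d = np.array([[0.6, 0.4], [0.1, 0.9], [0.30, 0.70]])
pseudo_labels = inter_pr_fuse(p2d, p3d)  # -> [0, 1, 1]
```

The slow model provides a stable target while the fast model tracks the test distribution, and the fused pseudo-labels then supervise the adaptation step.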
Collaborators: Inkyu Shin, Yi-Hsuan Tsai, Bingbing Zhuang, Samuel Schulter, Buyu Liu, Sparsh Garg, In So Kweon, Kuk-Jin Yoon
Paper
@inproceedings{mmtta_cvpr_2022,
author = {Inkyu Shin and Yi-Hsuan Tsai and Bingbing Zhuang and Samuel Schulter
and Buyu Liu and Sparsh Garg and In So Kweon and Kuk-Jin Yoon},
title = {{MM-TTA}: Multi-Modal Test-Time Adaptation for 3D Semantic Segmentation},
booktitle = {CVPR},
year = 2022
}