Publication Date: 4 June 2023
Event: 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023)
Reference: pp. 1-5, 2023
Authors: Mengqun Jin, Tsinghua University; Kai Li, NEC Laboratories America, Inc.; Shuyan Li, Tsinghua University; Chunming He, Tsinghua University; Xiu Li, Tsinghua University
Abstract: Semi-Supervised Domain Adaptation (SSDA) is a recently emerging research topic that extends the widely investigated Unsupervised Domain Adaptation (UDA) setting by additionally labeling a few target samples, i.e., the model is trained with labeled source samples, unlabeled target samples, and a few labeled target samples. Compared with UDA, the key to SSDA lies in how to most effectively utilize the few labeled target samples. Existing SSDA approaches simply merge the few precious labeled target samples into the vast pool of labeled source samples, or further align them, which dilutes the value of the labeled target samples and thus still yields a biased model. To remedy this, in this paper we propose to decouple SSDA into a UDA problem and a semi-supervised learning problem: we first learn a UDA model using labeled source and unlabeled target samples, and then adapt the learned UDA model in a semi-supervised way using labeled and unlabeled target samples. By utilizing the labeled source samples and target samples separately, the bias problem can be well mitigated. We further propose a consistency-learning-based mean teacher model to effectively adapt the learned UDA model using labeled and unlabeled target samples. Experiments show that our approach outperforms existing methods.
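The mean-teacher consistency learning mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration of the generic technique (EMA teacher weights plus a consistency penalty between teacher and student predictions), with hypothetical function names and toy values; it is not the authors' implementation.

```python
# Generic mean-teacher sketch (hypothetical names, not the paper's code).
# The teacher's weights track an exponential moving average (EMA) of the
# student's weights; a consistency loss pulls student predictions on
# unlabeled target samples toward the teacher's predictions.

def ema_update(teacher, student, alpha=0.99):
    """Update each teacher weight as an EMA of the corresponding student weight."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

def consistency_loss(p_teacher, p_student):
    """Mean squared error between teacher and student predictions."""
    return sum((t - s) ** 2 for t, s in zip(p_teacher, p_student)) / len(p_teacher)

# Toy example: one EMA step with alpha = 0.9.
teacher_w = {"w": 1.0}
student_w = {"w": 0.0}
teacher_w = ema_update(teacher_w, student_w, alpha=0.9)  # teacher_w["w"] -> 0.9
```

In a full training loop the student would be updated by gradient descent on the supervised loss (few labeled target samples) plus this consistency term, while the teacher is only ever updated via the EMA step.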
Publication Link: https://ieeexplore.ieee.org/document/10094776