Active Adversarial Domain Adaptation

Publication Date: 3/2/2020

Event: WACV 2020, Snowmass Village, CO USA

Reference: pp. 728-737, 2020

Authors: Jong-Chyi Su, University of Massachusetts, Amherst, NEC Laboratories America, Inc.; Yi-Hsuan Tsai, NEC Laboratories America, Inc.; Kihyuk Sohn, NEC Laboratories America, Inc.; Buyu Liu, NEC Laboratories America, Inc.; Subhransu Maji, University of Massachusetts, Amherst; Manmohan Chandraker, NEC Laboratories America, Inc., UC San Diego

Abstract: We propose an active learning approach for transferring representations across domains. Our approach, active adversarial domain adaptation (AADA), explores a duality between two related problems: adversarial domain alignment and importance sampling for adapting models across domains. The former uses a domain discriminative model to align domains, while the latter utilizes the model to weight samples to account for distribution shifts. Specifically, our importance weight promotes unlabeled samples with large uncertainty in classification and diversity compared to labeled examples, thus serving as a sample selection scheme for active learning. We show that these two views can be unified in one framework for domain adaptation and transfer learning when the source domain has many labeled examples while the target domain does not. AADA provides significant improvements over fine-tuning based approaches and other sampling methods when the two domains are closely related. Results on challenging domain adaptation tasks such as object detection demonstrate that the advantage over baseline approaches is retained even after hundreds of examples have been actively annotated.
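The abstract describes a selection criterion that favors target samples that are both uncertain under the task classifier and diverse with respect to the labeled source data, as judged by the domain discriminator. Below is a minimal sketch of one way such a score could be computed; the function name `aada_style_score` and the specific combination (an importance weight derived from the discriminator output multiplied by predictive entropy) are illustrative assumptions based on the abstract, not a verbatim reproduction of the paper's implementation.

```python
import numpy as np

def aada_style_score(p_source, class_probs, eps=1e-8):
    """Rank unlabeled target samples for active annotation (illustrative sketch).

    p_source:    (N,) domain discriminator outputs, interpreted here as the
                 probability that each sample comes from the labeled source domain.
    class_probs: (N, C) softmax outputs of the task classifier on the same samples.

    Samples that look unlike the source (low p_source) receive a large
    importance weight (diversity cue), and uncertain predictions have high
    entropy (uncertainty cue); their product scores samples that are both.
    """
    p_source = np.clip(p_source, eps, 1.0 - eps)
    importance = (1.0 - p_source) / p_source                              # diversity cue
    entropy = -np.sum(class_probs * np.log(class_probs + eps), axis=1)    # uncertainty cue
    return importance * entropy

# Usage example: select the top-k target samples to send for annotation.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p_src = rng.uniform(0.05, 0.95, size=100)
    logits = rng.normal(size=(100, 10))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    scores = aada_style_score(p_src, probs)
    top_k = np.argsort(-scores)[:10]
    print("indices to annotate:", top_k)
```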

Publication Link: https://ieeexplore.ieee.org/document/9093390

Additional Publication Link: https://arxiv.org/pdf/1904.07848v1.pdf