Gotta Adapt ’Em All: Joint Pixel and Feature-Level Domain Adaptation for Recognition in the Wild

Publication Date: 6/16/2019

Event: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)

Reference: pp. 2672-2681, 2019

Authors: Luan Tran, Michigan State University, NEC Laboratories America, Inc.; Kihyuk Sohn, NEC Laboratories America, Inc.; Xiang Yu, NEC Laboratories America, Inc.; Xiaoming Liu, Michigan State University; Manmohan Chandraker, NEC Laboratories America, Inc., University of California, San Diego

Abstract: Recent developments in deep domain adaptation have allowed knowledge transfer from a labeled source domain to an unlabeled target domain at the level of intermediate features or input pixels. We propose that combining the two yields advantages, in the form of different insights that lead to a novel design and complementary properties that result in better performance. At the feature level, inspired by insights from semi-supervised learning, we propose a classification-aware domain adversarial neural network that brings target examples into more classifiable regions of the source domain. Next, we posit that computer vision insights are more amenable to injection at the pixel level. In particular, we use 3D geometry and image synthesis based on a generalized appearance flow to preserve identity across pose transformations, while using an attribute-conditioned CycleGAN to translate a single source image into multiple target images that differ in lower-level properties such as lighting. Besides standard unsupervised domain adaptation (UDA) benchmarks, we validate on a novel and apt problem of car recognition in unlabeled surveillance images using labeled images from the web, handling explicitly specified, nameable factors of variation through pixel-level adaptation and implicit, unspecified factors through feature-level adaptation.
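
The feature-level component builds on domain adversarial training. Below is a minimal PyTorch sketch, not the authors' code: it shows the standard gradient-reversal domain adversarial objective that such a method starts from; the layer sizes, the lambda_ weight, and all function and variable names are illustrative assumptions, and the paper's classification-aware variant further conditions the adversarial signal on class predictions so that target features land in classifiable regions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips the gradient sign in the backward
    # pass, so the feature extractor is trained to fool the domain classifier.
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

# Illustrative shapes: 256-d inputs, 128-d features, 10 object classes.
feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
label_classifier  = nn.Linear(128, 10)   # supervised head, source labels only
domain_classifier = nn.Linear(128, 2)    # source-vs-target head

def adaptation_loss(x_src, y_src, x_tgt, lambda_=0.1):
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)
    # Classification loss on labeled source examples.
    cls_loss = F.cross_entropy(label_classifier(f_src), y_src)
    # Adversarial domain loss: the reversed gradient pushes the feature
    # extractor to make source and target features indistinguishable.
    feats = torch.cat([f_src, f_tgt], dim=0)
    dom_labels = torch.cat([torch.zeros(len(x_src)),
                            torch.ones(len(x_tgt))]).long()
    dom_logits = domain_classifier(GradReverse.apply(feats, lambda_))
    dom_loss = F.cross_entropy(dom_logits, dom_labels)
    return cls_loss + dom_loss

For example, adaptation_loss(torch.randn(8, 256), torch.randint(0, 10, (8,)), torch.randn(8, 256)) returns a single scalar to backpropagate, jointly updating all three modules.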

Publication Link: https://openaccess.thecvf.com/content_CVPR_2019/html/Tran_Gotta_Adapt_Em_All_Joint_Pixel_and_Feature-Level_Domain_Adaptation_CVPR_2019_paper.html