Publication Date: 6/19/2021
Event: CVPR 2021, Virtual
Authors: Masoud Faraki, NEC Laboratories America, Inc.; Xiang Yu, NEC Laboratories America, Inc.; Yi-Hsuan Tsai, NEC Laboratories America, Inc.; Yumin Suh, NEC Laboratories America, Inc.; Manmohan Chandraker, NEC Laboratories America, Inc., University of California, San Diego
Abstract: Face recognition models trained under the assumption of identical training and test distributions often suffer from poor generalization when faced with unknown variations, such as a novel ethnicity or unpredictable individual make-ups at test time. In this paper, we introduce a novel cross-domain metric learning loss, which we dub Cross-Domain Triplet (CDT) loss, to improve face recognition in unseen domains. The CDT loss encourages learning semantically meaningful features by enforcing compact feature clusters of identities from one domain, where the compactness is measured by underlying similarity metrics that belong to another training domain with different statistics. Intuitively, it discriminatively correlates explicit metrics derived from one domain with triplet samples from another domain in a unified loss function to be minimized within a network, which leads to better alignment of the training domains. The network parameters are further enforced to learn generalized features under domain shift, in a model-agnostic learning pipeline. Unlike the recent work of Meta Face Recognition, our method does not require a careful hard-pair sample mining and filtering strategy during training. Extensive experiments on various face recognition benchmarks show the superiority of our method in handling variations, compared to baseline and state-of-the-art methods.
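The core idea described above can be illustrated with a small sketch: a triplet loss in which the distance metric is estimated from a *different* domain's feature statistics. This is a hypothetical, simplified illustration (using an inverse-covariance, Mahalanobis-style metric and plain NumPy), not the authors' actual implementation; the function and parameter names are invented for this example.

```python
import numpy as np

def cross_domain_triplet_loss(anchors, positives, negatives,
                              metric_feats, margin=0.5):
    """Sketch of a CDT-style loss (illustrative, not the paper's code).

    Triplets (anchors, positives, negatives) come from one training
    domain; `metric_feats` are features from a second domain, whose
    statistics define the similarity metric used to measure distances.
    """
    # Estimate a metric from the other domain's feature statistics
    # (inverse covariance, regularized for numerical stability).
    cov = np.cov(metric_feats, rowvar=False)
    cov += 1e-3 * np.eye(metric_feats.shape[1])
    M = np.linalg.inv(cov)  # Mahalanobis-style metric from domain B

    def mdist(a, b):
        # Squared Mahalanobis distance per triplet row under metric M.
        d = a - b
        return np.einsum('ij,jk,ik->i', d, M, d)

    # Standard triplet hinge, but measured with the cross-domain metric:
    # pull positives closer than negatives by at least `margin`.
    losses = np.maximum(0.0,
                        mdist(anchors, positives)
                        - mdist(anchors, negatives) + margin)
    return float(losses.mean())
```

In the full method, this cross-domain coupling would be applied symmetrically across training domains inside a model-agnostic (meta-learning-style) pipeline, so the network parameters generalize under domain shift rather than overfitting to any single domain's statistics.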
Publication Link: https://ieeexplore.ieee.org/document/9578211