Abstract
Expanding visual categorization into a novel domain without the need for extra annotation has been a long-standing interest in multimedia intelligence. Previously, this challenge has been approached by unsupervised domain adaptation (UDA). Given labeled data from a source domain and unlabeled data from a target domain, UDA seeks a deep representation that is both discriminative and domain-invariant. While UDA focuses on the target domain, we argue that performance on both the source and target domains matters, since in practice it is unknown which domain a test example comes from. In this article, we extend UDA by proposing a new task called unsupervised domain expansion (UDE), which aims to adapt a deep model to the target domain with its unlabeled data while maintaining the model’s performance on the source domain. We propose Knowledge Distillation Domain Expansion (KDDE) as a general method for the UDE task. Its domain-adaptation module can be instantiated with any existing model. We develop a knowledge distillation-based learning mechanism that enables KDDE to optimize a single objective in which the source and target domains are treated equally. Extensive experiments on two major benchmarks, i.e., Office-Home and DomainNet, show that KDDE compares favorably against four competitive baselines, i.e., DDC, DANN, DAAN, and CDAN, on both the UDA and UDE tasks. Our study also reveals that current UDA models improve their performance on the target domain at the cost of a noticeable performance loss on the source domain.
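The single-objective idea behind KDDE can be illustrated with a minimal sketch. The function names (`kd_loss`, `ude_objective`) and the exact loss form below are our assumptions for illustration, not the paper's implementation: a student is distilled from a source-domain teacher and a domain-adapted target teacher, with the two distillation terms weighted equally (lam = 0.5) so that neither domain dominates the objective.

```python
import numpy as np

def softened_probs(logits, T=2.0):
    """Temperature-scaled softmax, producing Hinton-style soft targets."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(teacher_logits, student_logits, T=2.0):
    """Knowledge-distillation loss: KL(teacher || student) on softened distributions,
    averaged over the batch."""
    p = softened_probs(teacher_logits, T)
    q = softened_probs(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

def ude_objective(src_teacher_logits, tgt_teacher_logits,
                  student_src_logits, student_tgt_logits, lam=0.5):
    """Hypothetical single UDE objective: distill from a source-domain teacher
    on source examples and from a domain-adapted teacher on target examples,
    with lam = 0.5 treating the two domains equally."""
    return ((1.0 - lam) * kd_loss(src_teacher_logits, student_src_logits)
            + lam * kd_loss(tgt_teacher_logits, student_tgt_logits))
```

When the student's logits match both teachers exactly, the objective is zero; any mismatch on either domain contributes a positive penalty, so minimizing it pulls the single student toward both teachers at once rather than trading one domain off against the other.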
Unsupervised Domain Expansion for Visual Categorization