Abstract
Great progress has been achieved in domain adaptation over the past decades. Existing works typically rely on the ideal assumption that target-domain data seen at test time are independent and identically distributed with the target-domain data used for training. However, due to unpredictable corruptions (e.g., noise and blur) in real data, such as web images and real-world object detection, domain adaptation methods are increasingly required to be robust to corruptions on target domains. We investigate a new task, corruption-agnostic robust domain adaptation (CRDA), which requires a model to be accurate on original target data and robust against corruptions that are unavailable during training. This task is non-trivial due to the large domain discrepancy and the unsupervised nature of the target domains. We observe that naively combining popular domain adaptation methods with corruption-robustness methods yields suboptimal CRDA results. We propose a new approach based on two technical insights into CRDA: (1) an easy-to-plug module called the domain discrepancy generator (DDG), which generates samples that enlarge the domain discrepancy to mimic unpredictable corruptions; (2) a simple but effective teacher-student scheme with a contrastive loss to strengthen the constraints on target domains. Experiments verify that DDG maintains or even improves performance on original data and achieves better corruption robustness than the baselines. Our code is available at: https://github.com/YifanXu74/CRDA.
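The abstract does not specify the teacher-student contrastive loss in detail; the authors' implementation is in the linked repository. As a rough, non-authoritative illustration of the general idea only, the sketch below computes an InfoNCE-style contrastive loss that pulls each student embedding toward its corresponding teacher embedding (the positive pair) while pushing it away from the other teacher embeddings in the batch (negatives). The function name, the NumPy formulation, and the temperature value are all our own assumptions, not the paper's method.

```python
import numpy as np

def contrastive_teacher_student_loss(student, teacher, temperature=0.1):
    """InfoNCE-style loss between student and teacher embeddings.

    student, teacher: (N, D) arrays; row i of each is the same sample's
    embedding under the two networks (the positive pair).
    """
    # Normalize rows so dot products become cosine similarities.
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    # Similarity of every student embedding to every teacher embedding.
    logits = (s @ t.T) / temperature
    # Row-wise log-softmax (with max subtraction for numerical stability).
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive for sample i is teacher embedding i (the diagonal).
    return -np.mean(np.diag(log_probs))
```

When the student and teacher agree on every sample, the diagonal dominates each row and the loss approaches zero; mismatched embeddings drive the loss toward log N. In the paper's setting, the teacher would see original target samples and the student would see DDG-corrupted ones, so minimizing this loss encourages corruption-invariant target features.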
Towards Corruption-Agnostic Robust Domain Adaptation