Towards Corruption-Agnostic Robust Domain Adaptation

Published: 04 March 2022

Abstract

Great progress has been achieved in domain adaptation over recent decades. Existing works typically rest on the ideal assumption that testing target domains are independent and identically distributed with training target domains. However, due to unpredictable corruptions (e.g., noise and blur) in real data, such as web images and real-world object detection, domain adaptation methods are increasingly required to be corruption robust on target domains. We investigate a new task, corruption-agnostic robust domain adaptation (CRDA), which aims to be accurate on original data and robust against corruptions that are unavailable during training on target domains. This task is non-trivial because of the large domain discrepancy and the unsupervised nature of target domains. We observe that simple combinations of popular domain adaptation and corruption robustness methods yield suboptimal CRDA results. We therefore propose a new approach based on two technical insights into CRDA: (1) an easy-to-plug module called the domain discrepancy generator (DDG), which generates samples that enlarge domain discrepancy to mimic unpredictable corruptions; (2) a simple but effective teacher-student scheme with a contrastive loss to strengthen the constraints on target domains. Experiments verify that our approach maintains or even improves performance on original data and achieves better corruption robustness than baselines. Our code is available at: https://github.com/YifanXu74/CRDA.
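The teacher-student scheme with contrastive loss mentioned in the abstract can be sketched as an InfoNCE-style objective: for each unlabeled target sample, the student embedding of a perturbed view (e.g., a DDG-generated sample) is pulled toward the teacher embedding of the clean view, and pushed away from teacher embeddings of other samples. The sketch below is illustrative only, not the authors' implementation; the function names, the cosine similarity choice, and the temperature `tau` are assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(teacher_emb, student_emb, tau=0.1):
    """InfoNCE-style teacher-student contrastive loss.

    teacher_emb[i]: teacher embedding of the clean target sample i.
    student_emb[i]: student embedding of a perturbed view of sample i.
    Each student embedding should match the teacher embedding of the
    *same* sample (positive pair) against all other teacher embeddings
    in the batch (negatives).
    """
    n = len(teacher_emb)
    loss = 0.0
    for i in range(n):
        sims = [math.exp(cosine(student_emb[i], teacher_emb[j]) / tau)
                for j in range(n)]
        loss += -math.log(sims[i] / sum(sims))
    return loss / n
```

When student and teacher embeddings of the same sample agree, the loss is near zero; misaligned pairs are penalized, which is what enforces consistency between clean and corrupted views on the target domain.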



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 4
  November 2022, 497 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3514185
  Editor: Abdulmotaleb El Saddik


            Publisher

            Association for Computing Machinery

            New York, NY, United States

Publication History

• Received: 1 April 2021
• Revised: 1 August 2021
• Accepted: 1 November 2021
• Published: 4 March 2022


            Qualifiers

            • research-article
            • Refereed
