Rectified Meta-learning from Noisy Labels for Robust Image-based Plant Disease Classification

Published: 25 January 2022

Abstract

Plant diseases are one of the main threats to food security and crop production. It is thus valuable to exploit recent advances in artificial intelligence to assist plant disease diagnosis. One popular approach casts this problem as a leaf image classification task, which can then be addressed by powerful convolutional neural networks (CNNs). However, the performance of CNN-based classification depends on a large amount of high-quality, manually labeled training data, and manual annotation inevitably introduces label noise in practice, leading to model overfitting and performance degradation. To overcome this problem, we propose a novel framework that incorporates a rectified meta-learning module into the common CNN paradigm to train a noise-robust deep network without extra supervision information. The proposed method enjoys the following merits: (i) The rectified meta-learning module is designed to pay more attention to unbiased samples, leading to accelerated convergence and improved classification accuracy. (ii) Our method makes no assumption about the label noise distribution, and thus works well on various kinds of noise. (iii) Our method serves as a plug-and-play module that can be embedded into any deep model optimized by gradient descent. Extensive experiments demonstrate the superior performance of our algorithm over state-of-the-art methods.
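The reweighting idea behind a meta-learning module of this kind can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm; it is a toy example of meta-learning-based sample reweighting in the spirit of learning-to-reweight methods, using a plain logistic regression on synthetic data (all names, the 30% noise rate, and the small clean meta set are assumptions for illustration). Samples whose per-sample gradient aligns with the gradient on a small trusted set keep their weight; the rest are rectified to zero via `max(0, ·)`, which suppresses biased (mislabeled) samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic linearly separable "disease vs. healthy" features.
n, d = 200, 5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

# Corrupt 30% of training labels with symmetric flips.
flip = rng.random(n) < 0.3
y_noisy = np.where(flip, 1 - y, y)

# Small trusted meta set with clean labels; the rest is noisy training data.
Xm, ym = X[:20], y[:20]
Xt, yt = X[20:], y_noisy[20:]

w = np.zeros(d)
lr = 0.5
for step in range(300):
    # Per-sample gradients of the logistic loss on the noisy training set.
    p = sigmoid(Xt @ w)
    g_i = (p - yt)[:, None] * Xt                      # shape (n_train, d)
    # Meta gradient: average gradient on the clean set at current parameters.
    pm = sigmoid(Xm @ w)
    g_meta = ((pm - ym)[:, None] * Xm).mean(axis=0)   # shape (d,)
    # Rectified weights: keep only samples whose gradient agrees with the
    # clean-set gradient; zero out the rest, then normalize.
    raw = np.maximum(0.0, g_i @ g_meta)
    wts = raw / raw.sum() if raw.sum() > 0 else np.full(len(raw), 1.0 / len(raw))
    # Weighted gradient descent step.
    w -= lr * (wts[:, None] * g_i).sum(axis=0)

# Accuracy against the clean ground-truth labels.
acc = ((sigmoid(X @ w) > 0.5).astype(float) == y).mean()
```

In the paper's setting the classifier would be a CNN over leaf images rather than a linear model, and the per-sample weights would be produced by the rectified meta-learning module; the gradient-alignment rule here only mimics the general idea of down-weighting noisy samples during training.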



• Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 1s
  February 2022, 352 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/3505206


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 January 2021
• Revised: 1 May 2021
• Accepted: 1 June 2021
• Published: 25 January 2022

Published in TOMM Volume 18, Issue 1s
