Glaucoma Assessment from Fundus Images with Fundus to OCT Feature Space Mapping

Published: 15 October 2021

Abstract

Early detection and treatment of glaucoma is critical, as it is a chronic eye disease that leads to irreversible loss of vision. Existing automated systems rely largely on fundus images for glaucoma assessment due to their fast acquisition and cost-effectiveness. Optical Coherence Tomography (OCT) images provide vital and unambiguous information about nerve fiber loss and optic cup morphology, both of which are essential for disease assessment; however, the high cost of OCT is a deterrent to its deployment in large-scale screening. In this article, we present a novel computer-aided diagnosis (CAD) solution in which both OCT and fundus images are leveraged to learn a model that maps fundus features to the OCT feature space. We show how this model can subsequently be used to detect glaucoma given an image from only one modality (fundus). The proposed model has been validated extensively on four public and two private datasets. It attained an AUC/sensitivity of 0.9429/0.9044 on a diverse set of 568 images, which is superior to the figures obtained by a model trained only on fundus features. Cross-validation was also performed on nearly 1,600 images drawn from a private (OD-centric) and a public (macula-centric) dataset, and the proposed model outperformed the state-of-the-art method by 8% (public) to 18% (private). We therefore conclude that fundus to OCT feature space mapping is an attractive option for glaucoma detection.
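The core idea of the abstract can be sketched in miniature: learn a mapping from fundus features to OCT features on paired training data, then, at test time, predict OCT-like features from a fundus image alone and use both feature sets for classification. The sketch below uses synthetic feature vectors and an ordinary least-squares mapping as a stand-in for the paper's learned deep model; all dimensions and variable names are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: paired fundus and OCT feature vectors from a
# training cohort where both modalities were acquired.
n_train, d_fundus, d_oct = 200, 32, 16
F_train = rng.normal(size=(n_train, d_fundus))              # fundus features
W_true = rng.normal(size=(d_fundus, d_oct))
O_train = F_train @ W_true + 0.01 * rng.normal(size=(n_train, d_oct))  # OCT features

# Learn the fundus -> OCT feature-space mapping. Here it is a linear
# least-squares fit; the paper learns this mapping with a deep network.
W, *_ = np.linalg.lstsq(F_train, O_train, rcond=None)

# At screening time only a fundus image is available: its OCT-like
# features are synthesized through the learned mapping, and the
# combined representation is what a downstream classifier would see.
f_test = rng.normal(size=(1, d_fundus))
o_pred = f_test @ W
combined = np.concatenate([f_test, o_pred], axis=1)
print(combined.shape)  # (1, 48)
```

The design point this illustrates is that the expensive modality (OCT) is needed only at training time; deployment requires just the cheap modality, which is what makes the approach attractive for large-scale screening.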


Published in ACM Transactions on Computing for Healthcare, Volume 3, Issue 1 (January 2022), 255 pages.
ISSN: 2691-1957
EISSN: 2637-8051
DOI: 10.1145/3485154

Copyright © 2021 Association for Computing Machinery.

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History
• Received: 1 May 2020
• Revised: 1 February 2021
• Accepted: 1 June 2021
• Published: 15 October 2021

Qualifiers: research-article, refereed
